Commit 2422ba7

logarithm authored and gianm committed
Update MMX libraries and replace scala_tools.time (#220)
* Removed Scala 2.10 support
* Updated Samza and Kafka versions
* Updated documentation and sbt version
* Removed JDK 7, since the new Samza uses JDK 8 features
* Updated MMX library versions
* Replaced scala_tools.time with nscala_time, since scala_tools.time doesn't support Scala 2.12 and nscala_time is actively maintained
* Fix for sigar
* Updated twitter-util and finagle
* Updated scala-util and removed the resolver for sigar
* Updated plugin versions
1 parent eda0aeb commit 2422ba7
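The library swap in this commit is mostly a mechanical import change, since nscala_time is an actively maintained fork of the same Joda-Time wrapper and keeps the `Imports._` idiom. A minimal sketch of the migration (the nscala-time version shown is illustrative, not taken from this commit):

```scala
// build.sbt (illustrative coordinates):
//   libraryDependencies += "com.github.nscala-time" %% "nscala-time" % "2.16.0"

// before: import org.scala_tools.time.Imports._
import com.github.nscala_time.time.Imports._

object TimeImportsExample extends App {
  // The operator DSL on Joda-Time types carries over unchanged.
  val deadline: DateTime = DateTime.now() + 5.minutes
  println(deadline)
}
```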

41 files changed (+242, -234 lines)


.travis.yml (+1, -3)

@@ -1,10 +1,8 @@
 language: scala
 scala:
-  - 2.10.5
-  - 2.11.7
+  - 2.11.8
 sudo: false
 jdk:
   - oraclejdk8
-  - oraclejdk7
 script:
   - sbt ++$TRAVIS_SCALA_VERSION -Dfile.encoding=UTF8 -J-XX:ReservedCodeCacheSize=256M -J-Xms512m -J-Xmx512m test

CONTRIBUTING.md (+3, -2)

@@ -3,8 +3,9 @@
 Tranquility generally follows the same contributing guidelines as Druid. See here for Druid's guidelines:
 https://github.com/druid-io/druid/blob/master/CONTRIBUTING.md
 
-The Druid [intellij_formatting.jar](https://github.com/druid-io/druid/raw/master/intellij_formatting.jar) contains a
-Scala codestyle as well, which you can use to ensure your Scala code is formatted according to project conventions.
+The Druid [druid_intellij_formatting.xml](https://github.com/druid-io/druid/raw/master/druid_intellij_formatting.xml)
+contains a Scala codestyle as well, which you can use to ensure your Scala code is formatted according to project
+conventions.
 
 The [Druid CLA](http://druid.io/community/cla.html) applies to Tranquility as well. Please fill it out when
 contributing to either project. The same form does apply to both projects, so there is no need to fill it out twice.

README.md (+9, -11)

@@ -41,35 +41,33 @@ Central to make them easy to include. The current stable versions are:
 <dependency>
   <groupId>io.druid</groupId>
   <artifactId>tranquility-core_2.11</artifactId>
-  <version>0.8.1</version>
+  <version>0.9.0</version>
 </dependency>
 <dependency>
   <groupId>io.druid</groupId>
-  <artifactId>tranquility-samza_2.10</artifactId>
-  <version>0.8.1</version>
+  <artifactId>tranquility-samza_2.11</artifactId>
+  <version>0.9.0</version>
 </dependency>
 <dependency>
   <groupId>io.druid</groupId>
   <artifactId>tranquility-spark_2.11</artifactId>
-  <version>0.8.1</version>
+  <version>0.9.0</version>
 </dependency>
 <dependency>
   <groupId>io.druid</groupId>
   <artifactId>tranquility-storm_2.11</artifactId>
-  <version>0.8.1</version>
+  <version>0.9.0</version>
 </dependency>
 <dependency>
   <groupId>io.druid</groupId>
   <artifactId>tranquility-flink_2.11</artifactId>
-  <version>0.8.1</version>
+  <version>0.9.0</version>
 </dependency>
 ```
 
 You only need to include the modules you are actually using.
 
-All Tranquility modules are built for both Scala 2.10 and 2.11, except for the Samza module, which is only built for
-Scala 2.10. If you're using Scala for your own code, you should choose the Tranquility build that matches your version
-of Scala. Otherwise, Scala 2.11 is recommended.
+All Tranquility modules are built for Scala 2.11.
 
 This version is built to work with Druid 0.7.x and 0.8.x. If you are using Druid 0.6.x, you may want to use Tranquility
 v0.3.2, which is the most recent version built for use with Druid 0.6.x.

@@ -85,10 +83,10 @@ run them directly. The distribution also includes the [Core API](docs/core.md) a
 them rather than get them through Maven.
 
 The current distribution is:
-[tranquility-distribution-0.8.1](http://static.druid.io/tranquility/releases/tranquility-distribution-0.8.1.tgz).
+[tranquility-distribution-0.9.0](http://static.druid.io/tranquility/releases/tranquility-distribution-0.9.0.tgz).
 
 To use it, first download it and then unpack it into your directory of choice by running
-`tar -xzf tranquility-distribution-0.8.1.tgz`.
+`tar -xzf tranquility-distribution-0.9.0.tgz`.
 
 ### How to Contribute
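For sbt users, the equivalent of the Maven snippet above is a sketch like the following, using the same io.druid coordinates; `%%` appends the `_2.11` suffix automatically when `scalaVersion` is 2.11.x:

```scala
// Include only the modules you actually use; versions match the Maven snippet.
libraryDependencies ++= Seq(
  "io.druid" %% "tranquility-core" % "0.9.0",
  "io.druid" %% "tranquility-flink" % "0.9.0"
)
```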

build.sbt (+21, -26)

@@ -1,6 +1,4 @@
-scalaVersion := "2.10.6"
-
-crossScalaVersions := Seq("2.10.6", "2.11.8")
+scalaVersion in ThisBuild := "2.11.8"
 
 // Disable parallel execution, the various Druid oriented tests need to claim ports
 parallelExecution in ThisBuild := false

@@ -18,14 +16,14 @@ val druidVersion = "0.9.2"
 val curatorVersion = "2.12.0"
 val guiceVersion = "4.0"
 val flinkVersion = "1.0.3"
-val finagleVersion = "6.31.0"
-val twitterUtilVersion = "6.30.0"
-val samzaVersion = "0.8.0"
+val finagleVersion = "6.43.0"
+val twitterUtilVersion = "6.42.0"
+val samzaVersion = "0.12.0"
 val sparkVersion = "1.6.0"
 val scalatraVersion = "2.3.1"
 val jettyVersion = "9.2.5.v20141112"
 val apacheHttpVersion = "4.3.3"
-val kafkaVersion = "0.8.2.2"
+val kafkaVersion = "0.10.1.1"
 val airlineVersion = "0.7"
 
 def dependOnDruid(artifact: String) = {

@@ -45,8 +43,8 @@ def dependOnDruid(artifact: String) = {
 }
 
 val coreDependencies = Seq(
-  "com.metamx" %% "scala-util" % "1.11.6" exclude("log4j", "log4j") force(),
-  "com.metamx" % "java-util" % "0.27.10" exclude("log4j", "log4j") force(),
+  "com.metamx" %% "scala-util" % "1.13.2" exclude("log4j", "log4j") force(),
+  "com.metamx" % "java-util" % "0.28.2" exclude("log4j", "log4j") force(),
   "io.netty" % "netty" % "3.10.5.Final" force(),
   "org.apache.curator" % "curator-client" % curatorVersion force(),
   "org.apache.curator" % "curator-framework" % curatorVersion force(),

@@ -55,8 +53,8 @@ val coreDependencies = Seq(
   "com.twitter" %% "util-core" % twitterUtilVersion force(),
   "com.twitter" %% "finagle-core" % finagleVersion force(),
   "com.twitter" %% "finagle-http" % finagleVersion force(),
-  "org.slf4j" % "slf4j-api" % "1.7.12" force() force(),
-  "org.slf4j" % "jul-to-slf4j" % "1.7.12" force() force(),
+  "org.slf4j" % "slf4j-api" % "1.7.25" force() force(),
+  "org.slf4j" % "jul-to-slf4j" % "1.7.25" force() force(),
   "org.apache.httpcomponents" % "httpclient" % apacheHttpVersion force(),
   "org.apache.httpcomponents" % "httpcore" % apacheHttpVersion force(),
 

@@ -88,7 +86,7 @@ val loggingDependencies = Seq(
   "org.slf4j" % "jul-to-slf4j" % "1.7.12"
 )
 
-def flinkDependencies(scalaVersion: String) = {
+val flinkDependencies = {
   Seq(
     "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "optional"
       exclude("log4j", "log4j")

@@ -146,19 +144,21 @@ val coreTestDependencies = Seq(
   "org.slf4j" % "jul-to-slf4j" % "1.7.12" % "test"
 ) ++ loggingDependencies.map(_ % "test")
 
-def flinkTestDependencies(scalaVersion: String) = {
+val flinkTestDependencies = {
   Seq("org.apache.flink" % "flink-core" % flinkVersion % "test" classifier "tests",
     "org.apache.flink" %% "flink-runtime" % flinkVersion % "test" classifier "tests",
     "org.apache.flink" %% "flink-test-utils" % flinkVersion % "test"
   ).map(_ exclude("log4j", "log4j") exclude("org.slf4j", "slf4j-log4j12") force()) ++
     loggingDependencies.map(_ % "test")
 }
 
-// Force 2.10 here, makes update resolution happy, but since w'ere not building for 2.11
-// we won't end up in runtime version hell by doing this.
-val samzaTestDependencies = Seq(
-  "org.apache.samza" % "samza-core_2.10" % samzaVersion % "test"
-)
+val samzaTestDependencies = {
+  Seq(
+    "org.apache.samza" %% "samza-core" % samzaVersion % "test",
+    "org.apache.samza" %% "samza-kafka" % samzaVersion % "test"
+  ).map(_ exclude("log4j", "log4j") exclude("org.slf4j", "slf4j-log4j12") force()) ++
+    loggingDependencies.map(_ % "test")
+}
 
 val serverTestDependencies = Seq(
   "org.scalatra" %% "scalatra-test" % scalatraVersion % "test"

@@ -204,8 +204,8 @@ lazy val commonSettings = Seq(
   </developers>),
 
   fork in Test := true
-) ++ releaseSettings ++ net.virtualvoid.sbt.graph.Plugin.graphSettings ++ Seq(
-  ReleaseKeys.publishArtifactsAction := PgpKeys.publishSigned.value
+) ++ Seq(
+  releasePublishArtifactsAction := PgpKeys.publishSigned.value
 )
 
 lazy val root = project.in(file("."))

@@ -222,8 +222,7 @@ lazy val core = project.in(file("core"))
 lazy val flink = project.in(file("flink"))
   .settings(commonSettings: _*)
   .settings(name := "tranquility-flink")
-  .settings(libraryDependencies <++= scalaVersion(flinkDependencies))
-  .settings(libraryDependencies <++= scalaVersion(flinkTestDependencies))
+  .settings(libraryDependencies ++= (flinkDependencies ++ flinkTestDependencies))
   .dependsOn(core % "test->test;compile->compile")
 
 lazy val spark = project.in(file("spark"))

@@ -243,10 +242,6 @@ lazy val samza = project.in(file("samza"))
   .settings(commonSettings: _*)
   .settings(name := "tranquility-samza")
   .settings(libraryDependencies ++= (samzaDependencies ++ samzaTestDependencies))
-  // don't compile or publish for Scala > 2.10
-  .settings((skip in compile) := scalaVersion { sv => !sv.startsWith("2.10.") }.value)
-  .settings((skip in test) := scalaVersion { sv => !sv.startsWith("2.10.") }.value)
-  .settings(publishArtifact <<= scalaVersion { sv => sv.startsWith("2.10.") })
   .settings(publishArtifact in Test := false)
   .dependsOn(core % "test->test;compile->compile")
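A side effect of dropping the cross-build shows up in the flink and samza settings above: dependency lists no longer need to be functions of `scalaVersion`, so the old `<++=` applicator (deprecated in newer sbt) can become a plain `++=`. A before/after sketch of that pattern, with a hypothetical `example` project:

```scala
// Before: dependencies computed from the active cross-build Scala version.
//   def flinkDeps(scalaVersion: String) = Seq(...)
//   .settings(libraryDependencies <++= scalaVersion(flinkDeps))

// After: with a single Scala version there is nothing to parameterize.
val flinkDeps = Seq("org.apache.flink" %% "flink-streaming-scala" % "1.0.3" % "optional")

lazy val example = project
  .settings(libraryDependencies ++= flinkDeps)
```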

core/src/main/scala/com/metamx/tranquility/beam/BeamMaker.scala (+1, -1)

@@ -18,8 +18,8 @@
  */
 package com.metamx.tranquility.beam
 
+import com.github.nscala_time.time.Imports._
 import com.metamx.common.scala.untyped._
-import org.scala_tools.time.Imports._
 
 /**
  * Makes beams for particular intervals and partition numbers. Can also serialize and deserialize beam representations.

core/src/main/scala/com/metamx/tranquility/beam/ClusteredBeam.scala (+25, -29)

@@ -19,6 +19,7 @@
 package com.metamx.tranquility.beam
 
 import com.fasterxml.jackson.databind.ObjectMapper
+import com.github.nscala_time.time.Imports._
 import com.google.common.util.concurrent.ThreadFactoryBuilder
 import com.metamx.common.scala.Logging
 import com.metamx.common.scala.Predef._

@@ -30,22 +31,17 @@ import com.metamx.common.scala.timekeeper.Timekeeper
 import com.metamx.common.scala.untyped._
 import com.metamx.emitter.service.ServiceEmitter
 import com.metamx.tranquility.typeclass.Timestamper
-import com.twitter.util.Future
-import com.twitter.util.FuturePool
-import com.twitter.util.Promise
-import com.twitter.util.Return
-import com.twitter.util.Throw
+import com.twitter.util._
 import java.util.UUID
 import java.util.concurrent.Executors
 import java.util.concurrent.atomic.AtomicBoolean
 import org.apache.curator.framework.CuratorFramework
 import org.apache.curator.framework.recipes.locks.InterProcessSemaphoreMutex
 import org.apache.zookeeper.KeeperException.NodeExistsException
+import org.joda.time.chrono.ISOChronology
 import org.joda.time.DateTime
 import org.joda.time.DateTimeZone
 import org.joda.time.Interval
-import org.joda.time.chrono.ISOChronology
-import org.scala_tools.time.Implicits._
 import scala.collection.JavaConverters._
 import scala.collection.mutable
 import scala.language.reflectiveCalls

@@ -190,16 +186,16 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
   private[this] def beam(timestamp: DateTime, now: DateTime): Future[Beam[EventType]] = {
     val bucket = tuning.segmentBucket(timestamp)
     val creationInterval = new Interval(
-      tuning.segmentBucket(now - tuning.windowPeriod).start.millis,
-      tuning.segmentBucket(Seq(now + tuning.warmingPeriod, now + tuning.windowPeriod).maxBy(_.millis)).end.millis,
+      tuning.segmentBucket(now - tuning.windowPeriod).start.getMillis,
+      tuning.segmentBucket(Seq(now + tuning.warmingPeriod, now + tuning.windowPeriod).maxBy(_.getMillis)).end.getMillis,
       ISOChronology.getInstanceUTC
     )
     val windowInterval = new Interval(
-      tuning.segmentBucket(now - tuning.windowPeriod).start.millis,
-      tuning.segmentBucket(now + tuning.windowPeriod).end.millis,
+      tuning.segmentBucket(now - tuning.windowPeriod).start.getMillis,
+      tuning.segmentBucket(now + tuning.windowPeriod).end.getMillis,
       ISOChronology.getInstanceUTC
     )
-    val futureBeamOption = beams.get(timestamp.millis) match {
+    val futureBeamOption = beams.get(timestamp.getMillis) match {
       case _ if !open => Future.value(None)
       case Some(x) if windowInterval.overlaps(bucket) => Future.value(Some(x))
       case Some(x) => Future.value(None)

@@ -210,7 +206,7 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
         // This could be more efficient, but it's happening infrequently so it's probably not a big deal.
         data.modify {
           prev =>
-            val prevBeamDicts = prev.beamDictss.getOrElse(timestamp.millis, Nil)
+            val prevBeamDicts = prev.beamDictss.getOrElse(timestamp.getMillis, Nil)
             if (prevBeamDicts.size >= tuning.partitions) {
               log.info(
                 "Merged beam already created for identifier[%s] timestamp[%s], with sufficient partitions (target = %d, actual = %d)",

@@ -236,8 +232,8 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
             val numSegmentsToCover = tuning.minSegmentsPerBeam +
               rand.nextInt(tuning.maxSegmentsPerBeam - tuning.minSegmentsPerBeam + 1)
             val intervalToCover = new Interval(
-              timestamp.millis,
-              tuning.segmentGranularity.increment(timestamp, numSegmentsToCover).millis,
+              timestamp.getMillis,
+              tuning.segmentGranularity.increment(timestamp, numSegmentsToCover).getMillis,
               ISOChronology.getInstanceUTC
             )
             val timestampsToCover = tuning.segmentGranularity.getIterable(intervalToCover).asScala.map(_.start)

@@ -249,7 +245,7 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
               // Expire old beamDicts
               tuning.segmentGranularity.increment(new DateTime(millis)) + tuning.windowPeriod < now
             }) ++ (for (ts <- timestampsToCover) yield {
-              val tsPrevDicts = prev.beamDictss.getOrElse(ts.millis, Nil)
+              val tsPrevDicts = prev.beamDictss.getOrElse(ts.getMillis, Nil)
               log.info(
                 "Creating new merged beam for identifier[%s] timestamp[%s] (target = %d, actual = %d)",
                 identifier,

@@ -272,10 +268,10 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
                 }
               )
             })
-              (ts.millis, tsNewDicts)
+              (ts.getMillis, tsNewDicts)
             })
             val newLatestCloseTime = new DateTime(
-              (Seq(prev.latestCloseTime.millis) ++ (prev.beamDictss.keySet -- newBeamDictss.keySet)).max,
+              (Seq(prev.latestCloseTime.getMillis) ++ (prev.beamDictss.keySet -- newBeamDictss.keySet)).max,
               ISOChronology.getInstanceUTC
             )
             ClusteredBeamMeta(

@@ -298,12 +294,12 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
           // Only add the beams we actually wanted at this time. This is because there might be other beams in ZK
           // that we don't want to add just yet, on account of maybe they need their partitions expanded (this only
           // happens when they are the necessary ones).
-          if (!beams.contains(timestamp.millis) && meta.beamDictss.contains(timestamp.millis)) {
-            val beamDicts = meta.beamDictss(timestamp.millis)
+          if (!beams.contains(timestamp.getMillis) && meta.beamDictss.contains(timestamp.getMillis)) {
+            val beamDicts = meta.beamDictss(timestamp.getMillis)
             log.info("Adding beams for identifier[%s] timestamp[%s]: %s", identifier, timestamp, beamDicts)
             // Should have better handling of unparseable zk data. Changing BeamMaker implementations currently
             // just causes exceptions until the old dicts are cleared out.
-            beams(timestamp.millis) = beamMergeFn(
+            beams(timestamp.getMillis) = beamMergeFn(
               beamDicts.zipWithIndex map {
                 case (beamDict, partitionNum) =>
                   val decorate = beamDecorateFn(tuning.segmentBucket(timestamp), partitionNum)

@@ -319,7 +315,7 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
             beams.remove(timestamp)
           }
           // Return requested beam. It may not have actually been created, so it's an Option.
-          beams.get(timestamp.millis) ifEmpty {
+          beams.get(timestamp.getMillis) ifEmpty {
            log.info(
              "Turns out we decided not to actually make beams for identifier[%s] timestamp[%s]. Returning None.",
              identifier,

@@ -352,7 +348,7 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
     val grouped: Seq[(DateTime, IndexedSeq[(EventType, Promise[SendResult])])] = (eventsWithPromises groupBy {
       case (event, promise) =>
         tuning.segmentBucket(timestamper(event)).start
-    }).toSeq.sortBy(_._1.millis)
+    }).toSeq.sortBy(_._1.getMillis)
     // Possibly warm up future beams
     def toBeWarmed(dt: DateTime, end: DateTime): List[DateTime] = {
       if (dt <= end) {

@@ -362,7 +358,7 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
       }
     }
     val latestEventTimestamp: Option[DateTime] = grouped.lastOption map { case (truncatedTimestamp, group) =>
-      val event: EventType = group.maxBy(tuple => timestamper(tuple._1).millis)._1
+      val event: EventType = group.maxBy(tuple => timestamper(tuple._1).getMillis)._1
       timestamper(event)
     }
     val warmingBeams: Future[Seq[Beam[EventType]]] = Future.collect(

@@ -406,13 +402,13 @@ class ClusteredBeam[EventType: Timestamper, InnerBeamType <: Beam[EventType]](
       data.modify {
         prev =>
           ClusteredBeamMeta(
-            Seq(prev.latestCloseTime, timestamp).maxBy(_.millis),
-            prev.beamDictss - timestamp.millis
+            Seq(prev.latestCloseTime, timestamp).maxBy(_.getMillis),
+            prev.beamDictss - timestamp.getMillis
          )
       } onSuccess {
         meta =>
           beamWriteMonitor.synchronized {
-            beams.remove(timestamp.millis)
+            beams.remove(timestamp.getMillis)
           }
       } map (_ => SendResult.Dropped)
     } else {

@@ -479,7 +475,7 @@ case class ClusteredBeamMeta(latestCloseTime: DateTime, beamDictss: Map[Long, Se
     Dict(
       // latestTime is only being written for backwards compatibility
       "latestTime" -> new DateTime(
-        (Seq(latestCloseTime.millis) ++ beamDictss.map(_._1)).max,
+        (Seq(latestCloseTime.getMillis) ++ beamDictss.map(_._1)).max,
         ISOChronology.getInstanceUTC
       ).toString(),
       "latestCloseTime" -> latestCloseTime.toString(),

@@ -499,7 +495,7 @@ object ClusteredBeamMeta
         case (k, vs) =>
           val ts = new DateTime(k, ISOChronology.getInstanceUTC)
           val beamDicts = list(vs) map (dict(_))
-          (ts.millis, beamDicts)
+          (ts.getMillis, beamDicts)
       }
       val latestCloseTime = new DateTime(d.getOrElse("latestCloseTime", 0L), ISOChronology.getInstanceUTC)
       Right(ClusteredBeamMeta(latestCloseTime, beams))
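Most of this file is the mechanical `.millis` → `.getMillis` rename forced by the import change: the removed `org.scala_tools.time.Implicits._` had enriched Joda's `DateTime` with a `millis` accessor, and the code now calls Joda-Time's own getter directly. A small sketch of the distinction, assuming nscala-time's standard `Imports` (where bare `millis` on an `Int` is a duration builder, not an accessor):

```scala
import com.github.nscala_time.time.Imports._

object MillisExample extends App {
  val t: DateTime = new DateTime("2000-01-01T00:00:00Z")

  // Joda-Time's own getter: epoch milliseconds, no implicit enrichment needed.
  val epoch: Long = t.getMillis

  // In nscala-time, `millis` on an Int builds a duration for date arithmetic,
  // so the old DateTime-accessor meaning of `.millis` no longer applies.
  val later: DateTime = t + 250.millis

  println((epoch, later))
}
```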
