Spark No Encoder found for java.io.Serializable in Map[String, java.io.Serializable]
I am writing a Spark job whose dataset is fairly flexible; it is defined as Dataset[Map[String, java.io.Serializable]].
Now the problem shows up: the Spark runtime complains No Encoder found for java.io.Serializable. I've tried Kryo serde, but it shows the same error message.
The reason I have to use this unusual Dataset type is that I have flexible fields per row, and the map looks like:
Map(
  "a" -> 1,
  "b" -> "bbb",
  "c" -> 0.1,
  ...
)
Is there any way in Spark to handle this flexible dataset type?
EDIT:
Here is a self-contained example anyone can try:
import org.apache.spark.sql.{Dataset, SparkSession}

object SerdeTest extends App {
  val sparkSession: SparkSession = SparkSession
    .builder()
    .master("local[2]")
    .getOrCreate()

  import sparkSession.implicits._

  val ret: Dataset[Record] = sparkSession.sparkContext.parallelize(0 to 10)
    .map(
      t => {
        val row = (0 to t).map(
          i => i -> i.asInstanceOf[Integer]
        ).toMap
        Record(map = row)
      }
    ).toDS()

  val repartitioned = ret.repartition(10)
  repartitioned.collect.foreach(println)
}

case class Record(
  map: Map[Int, java.io.Serializable]
)
The above code will give you the Encoder-not-found error:
Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for java.io.Serializable
- map value class: "java.io.Serializable"
- field (class: "scala.collection.immutable.Map", name: "map")
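For comparison, here is a sketch of the same pipeline with a concrete value type; Map[Int, String] uses only types Spark ships built-in encoders for, so it runs without the error (WorkingSerdeTest and StringRecord are illustrative names made up for this sketch):

import org.apache.spark.sql.{Dataset, SparkSession}

object WorkingSerdeTest extends App {
  val sparkSession: SparkSession = SparkSession
    .builder()
    .master("local[2]")
    .getOrCreate()

  import sparkSession.implicits._

  // Map[Int, String] uses only encoder-supported types, so the implicit
  // product encoder for StringRecord can be derived automatically.
  val ok: Dataset[StringRecord] = sparkSession.sparkContext.parallelize(0 to 10)
    .map(t => StringRecord((0 to t).map(i => i -> i.toString).toMap))
    .toDS()

  ok.collect.foreach(println)
}

case class StringRecord(map: Map[Int, String])

So the failure seems specific to the interface type java.io.Serializable as a map value, not to maps in general.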
apache-spark
pls show your code
– thebluephantom, Nov 16 '18 at 21:13
@thebluephantom added code, can be run directly in IntelliJ.
– linehrr, Nov 17 '18 at 23:14
I am not sure I follow what the question is.
– thebluephantom, Nov 18 '18 at 7:51
asked Nov 15 '18 at 23:12 by linehrr; edited Nov 17 '18 at 23:53
1 Answer
Found the answer: one way to solve this is to use the Kryo serde framework. The code change is minimal; just define an implicit Encoder using Kryo and bring it into scope wherever serialization is needed.
Here is the code example I got working (it can be run directly in IntelliJ or an equivalent IDE):
import org.apache.spark.sql._

object SerdeTest extends App {
  val sparkSession: SparkSession = SparkSession
    .builder()
    .master("local[2]")
    .getOrCreate()

  import sparkSession.implicits._

  // This is where you define the Encoder for your custom object type,
  // in this case Record with its Map[Int, java.io.Serializable] field.
  implicit val myObjEncoder: Encoder[Record] = org.apache.spark.sql.Encoders.kryo[Record]

  val ret: Dataset[Record] = sparkSession.sparkContext.parallelize(0 to 10)
    .map(
      t => {
        val row = (0 to t).map(
          i => i -> i.asInstanceOf[Integer]
        ).toMap
        Record(map = row)
      }
    ).toDS()

  val repartitioned = ret.repartition(10)
  repartitioned.collect.foreach(
    row => println(row.map)
  )
}

case class Record(
  map: Map[Int, java.io.Serializable]
)
This code produces the expected results:
Map(0 -> 0, 5 -> 5, 1 -> 1, 2 -> 2, 3 -> 3, 4 -> 4)
Map(0 -> 0, 1 -> 1, 2 -> 2)
Map(0 -> 0, 5 -> 5, 1 -> 1, 6 -> 6, 2 -> 2, 7 -> 7, 3 -> 3, 4 -> 4)
Map(0 -> 0, 1 -> 1)
Map(0 -> 0, 1 -> 1, 2 -> 2, 3 -> 3, 4 -> 4)
Map(0 -> 0, 1 -> 1, 2 -> 2, 3 -> 3)
Map(0 -> 0)
Map(0 -> 0, 5 -> 5, 1 -> 1, 6 -> 6, 2 -> 2, 3 -> 3, 4 -> 4)
Map(0 -> 0, 5 -> 5, 10 -> 10, 1 -> 1, 6 -> 6, 9 -> 9, 2 -> 2, 7 -> 7, 3 -> 3, 8 -> 8, 4 -> 4)
Map(0 -> 0, 5 -> 5, 1 -> 1, 6 -> 6, 9 -> 9, 2 -> 2, 7 -> 7, 3 -> 3, 8 -> 8, 4 -> 4)
Map(0 -> 0, 5 -> 5, 1 -> 1, 6 -> 6, 2 -> 2, 7 -> 7, 3 -> 3, 8 -> 8, 4 -> 4)
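One caveat worth noting: with Encoders.kryo the entire Record is serialized into a single binary column, so Spark SQL sees an opaque blob rather than typed fields, and Catalyst cannot prune or filter on individual map entries. A minimal check, assuming the same session and Dataset as above:

ret.printSchema()
// prints a single binary column, roughly:
// root
//  |-- value: binary

So this approach is a good fit when you mainly move the objects around (map, repartition, collect), less so when you need SQL-style operations on the fields.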
answered Nov 17 '18 at 23:35 by linehrr; edited Nov 17 '18 at 23:51