| Property | Value(s) |
| --- | --- |
| https://dblp.org/rdf/schema#authoredBy | https://dblp.org/pid/122/3593, https://dblp.org/pid/06/4240, https://dblp.org/pid/163/6632, https://dblp.org/pid/277/0616, https://dblp.org/pid/170/1608, https://dblp.org/pid/98/3076, https://dblp.org/pid/54/4989, https://dblp.org/pid/33/4058, https://dblp.org/pid/00/5012, https://dblp.org/pid/q/YuQiao1, https://dblp.org/pid/14/9399 |
| https://dblp.org/rdf/schema#bibtexType | http://purl.org/net/nknouf/ns/bibtex#Inproceedings |
| https://dblp.org/rdf/schema#createdBy | https://dblp.org/pid/122/3593, https://dblp.org/pid/06/4240, https://dblp.org/pid/163/6632, https://dblp.org/pid/277/0616, https://dblp.org/pid/170/1608, https://dblp.org/pid/98/3076, https://dblp.org/pid/54/4989, https://dblp.org/pid/33/4058, https://dblp.org/pid/00/5012, https://dblp.org/pid/q/YuQiao1, https://dblp.org/pid/14/9399 |
| https://dblp.org/rdf/schema#documentPage | http://papers.nips.cc/paper_files/paper/2023/hash/c1f7b1ed763e9c75e4db74b49b76db5f-Abstract-Conference.html |
| https://dblp.org/rdf/schema#listedOnTocPage | https://dblp.org/db/conf/nips/neurips2023 |
| https://dblp.org/rdf/schema#numberOfCreators | 11 |
| https://dblp.org/rdf/schema#primaryDocumentPage | http://papers.nips.cc/paper_files/paper/2023/hash/c1f7b1ed763e9c75e4db74b49b76db5f-Abstract-Conference.html |
| https://dblp.org/rdf/schema#publishedAsPartOf | https://dblp.org/rec/conf/nips/2023 |
| https://dblp.org/rdf/schema#publishedIn | NeurIPS |
| https://dblp.org/rdf/schema#publishedInBook | NeurIPS |
| https://dblp.org/rdf/schema#publishedInStream | https://dblp.org/streams/conf/nips |
| https://dblp.org/rdf/schema#title | VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks. |
| https://dblp.org/rdf/schema#yearOfEvent | 2023 |
| https://dblp.org/rdf/schema#yearOfPublication | 2023 |
| rdf:type | https://dblp.org/rdf/schema#Publication, https://dblp.org/rdf/schema#Inproceedings |
| rdfs:label | Wenhai Wang et al.: VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks. (2023) |
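The record above is a set of predicate/object pairs, so it maps naturally onto a plain dictionary. A minimal sketch (keys are shortened from the full `https://dblp.org/rdf/schema#` predicate IRIs for readability; all values are copied from the record itself):

```python
# Sketch of the dblp record above as a Python dict.
# Keys abbreviate the https://dblp.org/rdf/schema# predicates;
# values are taken verbatim from the record.
record = {
    "authoredBy": [
        "https://dblp.org/pid/122/3593",
        "https://dblp.org/pid/06/4240",
        "https://dblp.org/pid/163/6632",
        "https://dblp.org/pid/277/0616",
        "https://dblp.org/pid/170/1608",
        "https://dblp.org/pid/98/3076",
        "https://dblp.org/pid/54/4989",
        "https://dblp.org/pid/33/4058",
        "https://dblp.org/pid/00/5012",
        "https://dblp.org/pid/q/YuQiao1",
        "https://dblp.org/pid/14/9399",
    ],
    "numberOfCreators": 11,
    "title": ("VisionLLM: Large Language Model is also an "
              "Open-Ended Decoder for Vision-Centric Tasks."),
    "yearOfEvent": 2023,
    "yearOfPublication": 2023,
}

# Internal consistency check: the declared creator count
# matches the length of the authoredBy list.
assert record["numberOfCreators"] == len(record["authoredBy"])
```

This is only a convenience view of the data, not a dblp API; for real RDF processing the full predicate IRIs would be kept as keys.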