This HTML5 document contains 33 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.
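For working with these statements programmatically, a minimal sketch in Python using rdflib follows. Core rdflib does not ship a Microdata parser, so instead of scraping this HTML page the sketch loads a Turtle serialization of the same resource; the data URL reflects the usual DBpedia layout and is an assumption, and the triples returned may differ slightly from the snapshot listed on this page.

from rdflib import Graph

g = Graph()
# Load the Turtle serialization served alongside this HTML view of the resource.
g.parse("http://dbpedia.org/data/Visual_routine.ttl", format="turtle")

print(len(g), "triples loaded")
for s, p, o in g:
    print(s, p, o)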

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n5            https://global.dbpedia.org/id/
dbt           http://dbpedia.org/resource/Template:
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
prov          http://www.w3.org/ns/prov#
dbp           http://dbpedia.org/property/
dbc           http://dbpedia.org/resource/Category:
xsdh          http://www.w3.org/2001/XMLSchema#
wikidata      http://www.wikidata.org/entity/
gold          http://purl.org/linguistics/gold/
dbr           http://dbpedia.org/resource/
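These prefixes are display shorthand for the full IRIs; in code they must be bound explicitly before compact names can be used in queries or serializations. A minimal sketch, again assuming rdflib (the variable names are illustrative):

from rdflib import Graph, Namespace

DBO = Namespace("http://dbpedia.org/ontology/")
DBR = Namespace("http://dbpedia.org/resource/")
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.bind("dbo", DBO)
g.bind("dbr", DBR)
g.bind("dcterms", DCT)

# After parsing data into g, g.serialize(format="turtle") will abbreviate
# http://dbpedia.org/ontology/... as dbo:..., and so on for the other bindings.

Binding only affects how IRIs are abbreviated on output; the underlying triples always use the full forms listed above.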

Statements

Subject Item
dbr:Rumelhart_Prize
dbo:wikiPageWikiLink
dbr:Visual_routine
Subject Item
dbr:Perception
dbo:wikiPageWikiLink
dbr:Visual_routine
Subject Item
dbr:Routine
dbo:wikiPageWikiLink
dbr:Visual_routine
dbo:wikiPageDisambiguates
dbr:Visual_routine
Subject Item
dbr:Visual_routine
rdf:type
dbo:Software
rdfs:label
Visual routine
rdfs:comment
A visual routine is a means of extracting information from a visual scene. Shimon Ullman, in his studies on human visual cognition, proposed that the human visual system's task of perceiving shape properties and spatial relations is split into two successive stages: an early "bottom-up" stage during which base representations are generated from the visual input, and a later "top-down" stage during which high-level primitives dubbed "visual routines" extract the desired information from the base representations. In humans, the base representations generated during the bottom-up stage correspond to retinotopic maps (more than 15 of which exist in the cortex) for properties like color, edge orientation, speed of motion, and direction of motion. These base representations rely on fixed operations performed uniformly over the entire field of visual input.
dcterms:subject
dbc:Visual_system
dbo:wikiPageID
615555
dbo:wikiPageRevisionID
1069018544
dbo:wikiPageWikiLink
dbr:Object_recognition dbr:Occultation dbr:Visual_system dbr:Retinotopic_map dbr:Video_game dbc:Visual_system dbr:Top-down_and_bottom-up_design dbr:Shimon_Ullman dbr:Visual_field dbr:Cognition
owl:sameAs
n5:4xn2f freebase:m.02wwcn wikidata:Q7936621
dbp:wikiPageUsesTemplate
dbt:Reflist dbt:Short_description
dbo:abstract
A visual routine is a means of extracting information from a visual scene. Shimon Ullman, in his studies on human visual cognition, proposed that the human visual system's task of perceiving shape properties and spatial relations is split into two successive stages: an early "bottom-up" stage during which base representations are generated from the visual input, and a later "top-down" stage during which high-level primitives dubbed "visual routines" extract the desired information from the base representations. In humans, the base representations generated during the bottom-up stage correspond to retinotopic maps (more than 15 of which exist in the cortex) for properties like color, edge orientation, speed of motion, and direction of motion. These base representations rely on fixed operations performed uniformly over the entire field of visual input, and do not make use of object-specific knowledge, task-specific knowledge, or other higher-level information. The visual routines proposed by Ullman are high-level primitives which parse the structure of a scene, extracting spatial information from the base representations. These visual routines are composed of a sequence of elementary visual operators specific to the task at hand. Visual routines differ from the fixed operations of the base representations in that they are not applied uniformly over the entire visual field; rather, they are applied only to objects or areas specified by the routines. Ullman lists the following as examples of visual operators: shifting the processing focus, indexing a salient item for further processing, spreading activation over an area delimited by boundaries, tracing boundaries, and marking a location or object for future reference. When combined into visual routines, these elementary operators can be used to perform relatively sophisticated spatial tasks such as counting the number of objects satisfying a certain property, or recognizing a complex shape. A number of researchers have implemented visual routines for processing camera images, to perform tasks like determining the object a human in the camera image is pointing at. Researchers have also applied the visual routines approach to artificial map representations, for playing real-time 2D video games. In those cases, however, the map of the video game was provided directly, removing the need to deal with real-world perceptual tasks like object recognition and occlusion compensation.
gold:hypernym
dbr:Means
prov:wasDerivedFrom
wikipedia-en:Visual_routine?oldid=1069018544&ns=0
dbo:wikiPageLength
4128
foaf:isPrimaryTopicOf
wikipedia-en:Visual_routine
Subject Item
dbr:Visual_routines
dbo:wikiPageWikiLink
dbr:Visual_routine
dbo:wikiPageRedirects
dbr:Visual_routine
Subject Item
wikipedia-en:Visual_routine
foaf:primaryTopic
dbr:Visual_routine
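The same statements can also be retrieved live from the public DBpedia SPARQL endpoint. A minimal sketch using SPARQLWrapper follows; the endpoint URL is the standard public one, and because the endpoint tracks newer extractions than revision 1069018544, the result set may not match this page exactly.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

# Outgoing statements (dbr:Visual_routine as subject); swap the triple pattern to
# { ?s ?p dbr:Visual_routine } to get the incoming links listed under the other subjects.
sparql.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?p ?o WHERE { dbr:Visual_routine ?p ?o }
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])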