Accurate indoor and outdoor 3D mapping is a valuable resource for a wide range of applications. This paper describes an autonomous platform capable of generating 3D imagery of unknown indoor and outdoor environments. The system comprises a number of data-fusion processes executed in real time by on-board and/or off-board processing nodes. The platform's sensing suite includes multiple laser scanners for 2D and 3D perception, IMUs, 3D cameras (a Kinect, for indoor use), standard cameras, GPS (for outdoor operation), and dead-reckoning sensors. The acquired data are shared with multiple client processes responsible for different levels of perception and control. The output of the perception processes is in turn shared with higher-level processes such as 3D mapping, the generation of maps of diverse dense properties, and the detection and classification of obstacles and other application-specific context features.