Search Results

Search found 36 results on 2 pages for 'bson'.

Page 1/2 | 1 2  | Next Page >

  • Metsys.Bson - the BSON Library

    Earlier this month I detailed the implementation of the BSON serialization we used in NoRM - the C# MongoDB driver. I've since extracted the serialization/deserialization code and created a standalone project for it - in the hope that it might prove helpful to someone. If you need an efficient binary protocol to transfer data, look no further. There are two methods you need to be aware of: Serializer.Serialize and Deserializer.Deserialize. User u1 = new User{...}; byte[] bytes = Serializer.Serialize(u1); User...
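
    A minimal round trip with those two methods might look like the sketch below; the User class is an illustrative stand-in, and the generic Deserialize signature is assumed from the project's examples rather than confirmed here.

      using Metsys.Bson;

      public class User
      {
          public string Name { get; set; }
          public int Age { get; set; }
      }

      public static class RoundTripDemo
      {
          public static void Main()
          {
              User u1 = new User { Name = "karl", Age = 30 };
              byte[] bytes = Serializer.Serialize(u1);          // object -> BSON bytes
              User u2 = Deserializer.Deserialize<User>(bytes);  // BSON bytes -> object
          }
      }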

    Read the article

  • Facebook user_id as MongoDB BSON ObjectId?

    - by MattDiPasquale
    I'm rebuilding Lovers on Facebook with Sinatra & Redis. I like Redis because it doesn't have the long (12-byte) BSON ObjectIds, and I am storing sets of Facebook user_ids for each user. The sets are requests_sent, requests_received, & relationships, and they all contain Facebook user ids. I'm thinking of switching to MongoDB because I want to use its geospatial indexing. If I do, I'd want to use the FB user ids as the _id field, because I want the sets to be small and I want the JSON responses to be small. But is the BSON ObjectId better (more efficient for MongoDB) to use than just an integer (fb user_id)?

    Read the article

  • BSON Serialization

    BSON is a binary-encoded serialization of JSON-like documents, which essentially means it's an efficient way of transferring information. Part of my work on the MongoDB NoRM drivers, discussed in more detail by Rob Conery, is to write an efficient and maintainable BSON serializer and deserializer. The goal of the serializer is that you give it a .NET object and you get a byte array out of it which represents valid BSON. The deserializer does the opposite - give it a byte array and out pops your object...

    Read the article

  • stored as array understanding mongoid

    - by Gagan
    Hello friends. This is not a problem report; I just want to understand Mongoid's :stored_as => :array option better. I have the following code in my models:

      class Company
        include Mongoid::Document
        include Mongoid::Timestamps
        references_many :people, :stored_as => :array, :inverse_of => :companies
      end

      class Person
        include Mongoid::Document
        include Sunspot::Mongoid
        references_many :companies, :stored_as => :array, :inverse_of => :people
      end

    With :stored_as => :array we get a person_ids field on a Company object and a company_ids field on a Person object. Initially I inserted lots of people into a company, so the list of ids in the person_ids field is huge. I then deleted most of the people from the company, down to 8 people. Now I don't get why the person_ids field of the Company object is still storing all the ids of the deleted people. My console snapshot is the following:

      ruby-1.9.2-head
      Company.first.person_ids = [BSON::ObjectId('4d12d2907adf350695000025'), BSON::ObjectId('4d12d2907adf35069500002c'),
        BSON::ObjectId('4d12d2907adf350695000035'), BSON::ObjectId('4d12d2907adf35069500003f'),
        BSON::ObjectId('4d12d2907adf350695000048'), BSON::ObjectId('4d12d2907adf350695000052'),
        BSON::ObjectId('4d12d2907adf350695000059'), BSON::ObjectId('4d12d2907adf350695000062'),
        BSON::ObjectId('4d12d4017adf35069500008d'), BSON::ObjectId('4d12d4017adf350695000094'),
        BSON::ObjectId('4d12d4017adf35069500009d'), BSON::ObjectId('4d12d4017adf3506950000a7'),
        BSON::ObjectId('4d12d4017adf3506950000b0'), BSON::ObjectId('4d12d4017adf3506950000ba'),
        BSON::ObjectId('4d12d4017adf3506950000c1'), BSON::ObjectId('4d12d4017adf3506950000ca'),
        BSON::ObjectId('4d12d48a7adf3506950000f5'), BSON::ObjectId('4d12d48a7adf3506950000fc'),
        BSON::ObjectId('4d12d48a7adf350695000108'), BSON::ObjectId('4d12d48b7adf350695000115'),
        BSON::ObjectId('4d12d48b7adf350695000121'), BSON::ObjectId('4d12d48b7adf35069500012e'),
        BSON::ObjectId('4d12d48b7adf350695000135'), BSON::ObjectId('4d12d48b7adf350695000141'),
        BSON::ObjectId('4d12d53e7adf35069500016f'), BSON::ObjectId('4d12d53e7adf350695000176'),
        BSON::ObjectId('4d12d53e7adf350695000182'), BSON::ObjectId('4d12d53e7adf35069500018f'),
        BSON::ObjectId('4d12d53e7adf35069500019b'), BSON::ObjectId('4d12d53f7adf3506950001a8'),
        BSON::ObjectId('4d12d53f7adf3506950001af'), BSON::ObjectId('4d12d53f7adf3506950001bb'),
        BSON::ObjectId('4d12d8587adf3506950001e9'), BSON::ObjectId('4d12d8587adf3506950001f0'),
        BSON::ObjectId('4d12d8587adf3506950001ff'), BSON::ObjectId('4d12d8597adf35069500020f'),
        BSON::ObjectId('4d12d8597adf35069500021e'), BSON::ObjectId('4d12d8597adf35069500022e'),
        BSON::ObjectId('4d12d8597adf350695000235'), BSON::ObjectId('4d12d85a7adf350695000244'),
        BSON::ObjectId('4d12d9587adf35069500025b'), BSON::ObjectId('4d12db8b7adf35069500026a'),
        BSON::ObjectId('4d12de6f7adf3509c9000024'), BSON::ObjectId('4d12de6f7adf3509c900002b'),
        BSON::ObjectId('4d12de6f7adf3509c900003a'), BSON::ObjectId('4d12de707adf3509c900004a'),
        BSON::ObjectId('4d12de707adf3509c9000059'), BSON::ObjectId('4d12de707adf3509c9000069'),
        BSON::ObjectId('4d12de707adf3509c9000070'), BSON::ObjectId('4d12de717adf3509c900007f'),
        BSON::ObjectId('4d12e7f27adf350bd2000009'), BSON::ObjectId('4d12e81f7adf350bd2000015'),
        BSON::ObjectId('4d12e87f7adf350bd2000024'), BSON::ObjectId('4d12e8b87adf350bd200004c'),
        BSON::ObjectId('4d12e8b97adf350bd2000053'), BSON::ObjectId('4d12e8b97adf350bd200005c'),
        BSON::ObjectId('4d12e8b97adf350bd2000066'), BSON::ObjectId('4d12e8b97adf350bd200006f'),
        BSON::ObjectId('4d12e8b97adf350bd2000079'), BSON::ObjectId('4d12e8ba7adf350bd2000080'),
        BSON::ObjectId('4d12e8ba7adf350bd2000089'),
        BSON::ObjectId('4d12ee6b7adf350bd2000198'), BSON::ObjectId('4d12ee6b7adf350bd200019f'),
        BSON::ObjectId('4d12ee6c7adf350bd20001a5'), BSON::ObjectId('4d12ee6c7adf350bd20001ac'),
        BSON::ObjectId('4d12ee6c7adf350bd20001b2'), BSON::ObjectId('4d12ee6c7adf350bd20001b9'),
        BSON::ObjectId('4d12ee6c7adf350bd20001c0'), BSON::ObjectId('4d12ee6c7adf350bd20001c6'),
        BSON::ObjectId('4d141ca57adf35033e00006e'), BSON::ObjectId('4d141ca57adf35033e000075'),
        BSON::ObjectId('4d1420aa7adf350705000003'), BSON::ObjectId('4d1420aa7adf35070500000a'),
        BSON::ObjectId('4d1420f47adf350705000011'), BSON::ObjectId('4d1420f57adf350705000015'),
        BSON::ObjectId('4d1420f57adf350705000018'), BSON::ObjectId('4d1420f57adf35070500001c'),
        BSON::ObjectId('4d1420f57adf350705000023'), BSON::ObjectId('4d1420f57adf350705000026'),
        BSON::ObjectId('4d14215f7adf35070500004b'), BSON::ObjectId('4d14215f7adf350705000052'),
        BSON::ObjectId('4d14215f7adf350705000055'), BSON::ObjectId('4d14215f7adf350705000059'),
        BSON::ObjectId('4d14215f7adf35070500005c'), BSON::ObjectId('4d14215f7adf350705000060'),
        BSON::ObjectId('4d14215f7adf350705000067'), BSON::ObjectId('4d14215f7adf35070500006a')]

      Company.first.people.collect(&:id) = [BSON::ObjectId('4d14215f7adf35070500004b'),
        BSON::ObjectId('4d14215f7adf350705000052'), BSON::ObjectId('4d14215f7adf350705000055'),
        BSON::ObjectId('4d14215f7adf350705000059'), BSON::ObjectId('4d14215f7adf35070500005c'),
        BSON::ObjectId('4d14215f7adf350705000060'), BSON::ObjectId('4d14215f7adf350705000067'),
        BSON::ObjectId('4d14215f7adf35070500006a')]

    Shouldn't the Company.first.person_ids array only be storing the ids shown by Company.first.people.collect(&:id)? It would be helpful if someone could tell me when it is best to use the :stored_as => :array method. Does :stored_as => :array increase querying performance? Thanks

    Read the article

  • Can we represent bit fields in JSON/BSON?

    - by zubair
    We have a dozen simulators talking to each other on UDP. The interface definition is managed in a database. The simulators are written in different languages, mostly C++, some in Java and C#. Currently, when a systems engineer makes changes in the interface definition database, simulator developers manually update the communication data structures in their code. The data is mostly 2-5 bytes, with bit fields for each signal. What I want to do is generate one file from the interface definition database describing the byte and bit field definitions, and let each developer add it to his simulator code with minimal fuss. I looked at JSON/BSON but couldn't find a way to represent bit fields in them. Thanks, Zubair
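
    One common workaround (my sketch, not from the question) is to pack the bit fields into a plain integer on each side and carry that integer in the JSON/BSON document, since every JSON library can handle numbers; the field names and widths below are invented for illustration.

      // Pack two bit fields (3-bit mode, 5-bit level) into one byte so the
      // value can travel as an ordinary JSON/BSON integer. Field names and
      // widths are hypothetical.
      public static class SignalCodec
      {
          public static byte Pack(int mode, int level)
          {
              return (byte)((mode & 0x07) | ((level & 0x1F) << 3));
          }

          public static void Unpack(byte packed, out int mode, out int level)
          {
              mode = packed & 0x07;
              level = (packed >> 3) & 0x1F;
          }
      }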

    Read the article

  • Ruby. Mongoid. Relations

    - by Scepion1d
    I've encountered some problems with Mongoid. I have three models:

      require 'mongoid'

      class Configuration
        include Mongoid::Document
        belongs_to :user
        field :links, :type => Array
        field :root, :type => String
        field :objects, :type => Array
        field :categories, :type => Array
        has_many :entries
      end

      class TimeDim
        include Mongoid::Document
        field :day, :type => Integer
        field :month, :type => Integer
        field :year, :type => Integer
        field :day_of_week, :type => Integer
        field :minute, :type => Integer
        field :hour, :type => Integer
        has_many :entries
      end

      class Entry
        include Mongoid::Document
        belongs_to :configuration
        belongs_to :time_dim
        field :category, :type => String
        # any other dynamic fields
      end

    Creating documents for Configurations and TimeDims succeeds. But when I try to execute the following code:

      params = Hash.new
      params[:configuration] = config # an instance of Configuration from DB
      entry.each do |key, value|
        params[key.to_sym] = value # String
      end
      unless Entry.exists?(conditions: params)
        params[:time_dim] = self.generate_time_dim # an instance of TimeDim from DB
        params[:category] = self.detect_category(descr) # String
        Entry.new(params).save
      end

    ... I see the following output:

      /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize': Cannot serialize an object of class Configuration into BSON. (BSON::InvalidDocument)
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:604:in `construct_query_message'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:465:in `send_initial_query'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:458:in `refresh'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:128:in `next'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/db.rb:509:in `command'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:191:in `count'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:42:in `block in count'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/collections/retry.rb:29:in `retry_on_connection_failure'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:41:in `count'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/contexts/mongo.rb:93:in `count'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/criteria.rb:45:in `count'
      from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/finders.rb:60:in `exists?'
      from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:110:in `block (2 levels) in push_entries_to_db'
      from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `each'
      from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `block in push_entries_to_db'
      from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `each'
      from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `push_entries_to_db'
      from main_starter.rb:15:in `<main>'

    Can anyone tell me what I am doing wrong?

    Read the article

  • Haskell, mongodb, date

    - by r.sendecky
    How would I insert or auto-insert a date into MongoDB from Haskell? What is the best way to convert from the Mongo date type to a Haskell data type? Say, in a situation where I insert blog post records (any Haskell web framework) and I want to date-stamp every record automatically - how would I go about it? The question is more about type conversion and MongoDB date type creation from within the Haskell driver.

    Read the article

  • MongoDB search in C#

    - by user3684208
    I have a problem with querying MongoDB. In my code I have a method Get which takes a Dictionary as a parameter. It should go through the database and query it, comparing string and then object. I always get a problem with the object part: QueryDocument won't take in an object type, because it isn't a BsonValue. I have tried to cast it but it won't work. Do you have any suggestions? Thanks. Code part:

      public List<ExceptionViewModel> Get(Dictionary<string, object> FilteredExceptions)
      {
          MongoClient mongo = new MongoClient();
          MongoServer mongoServer = mongo.GetServer();
          MongoDatabase db = mongoServer.GetDatabase("Aplikacija");
          MongoCollection collection = db.GetCollection("Exceptions");
          List<ExceptionViewModel> Get = new List<ExceptionViewModel>();

          foreach (KeyValuePair<string, object> item in FilteredExceptions)
          {
              // item.Value is an object, which QueryDocument rejects - it expects a BsonValue
              var query = new QueryDocument(item.Key.ToString(), item.Value);
              foreach (ExceptionViewModel exception in collection.FindAs<ExceptionViewModel>(query))
              {
                  Console.WriteLine("{0}", exception.BrowserName);
                  Get.Add(exception); // without this the method always returns an empty list
              }
          }
          return Get;
      }

    Read the article

  • Problem with MongoDB Ruby Driver

    - by Paul
    I'm on Ubuntu, and I've run gem install mongo, which reported

      Successfully installed bson-1.0
      Successfully installed mongo-1.0
      2 gems installed

    I've started mongod. Now I cd to the mongo gem directory and try

      > ruby examples/simple.rb

    and I get the error

      ./examples/../lib/mongo.rb:31:in `require': no such file to load -- bson (LoadError)
      from ./examples/../lib/mongo.rb:31
      from examples/simple.rb:3:in `require'
      from examples/simple.rb:3

    which I can't make sense of, since the bson gem is installed:

      > gem list

      *** LOCAL GEMS ***

      bson (1.0)
      bson_ext (1.0)
      mongo (1.0)
      rack (1.1.0)
      sinatra (1.0)

    Any suggestions what's up here?

    Read the article

  • Exception in morphia-0.93-SNAPSHOT.jar

    - by Rupeshit
    Hi guys, I am trying to map a POJO class to MongoDB using morphia-0.93-SNAPSHOT.jar, but it is throwing "java.lang.NoClassDefFoundError: org/bson/types/CodeWScope", which refers to a BSON class, so I am not able to run those programs. Please, can anybody help me solve this problem?

    Read the article

  • Encoding::UndefinedConversionError from email body

    - by raam86
    Using the mail gem for Ruby, I am getting this message:

      mail.rb:22:in `encode': "\xC7" from ASCII-8BIT to UTF-8 (Encoding::UndefinedConversionError)
      from mail.rb:22:in `<main>'

    If I remove encode I get this message:

      ruby /var/lib/gems/1.9.1/gems/bson-1.7.0/lib/bson/bson_ruby.rb:63:in `rescue in to_utf8_binary': String not valid utf-8: "<div dir=\"ltr\"><div class=\"gmail_quote\">l<br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div dir=\"rtl\">\xC7\xE1\xE4\xD5 \xC8\xC7\xE1\xE1\xDB\xC9 \xC7\xE1\xDA\xD1\xC8\xED\xC9</div></div>\r\n</div><br></div>\r\n</div><br></div>\r\n</div><br></div>" (BSON::InvalidStringEncoding)

    This is my code:

      require 'mail'
      require 'mongo'

      connection = Mongo::Connection.new
      db = connection.db("DB")
      db = Mongo::Connection.new.db("DB")
      newsCollection = db["news"]

      Mail.defaults do
        retriever_method :pop3, :address => "pop.gmail.com",
                                :port => 995,
                                :user_name => 'my_username',
                                :password => '*****',
                                :enable_ssl => true
      end

      emails = Mail.last

      # Checks if email is multipart and decodes accordingly. Put to extract UTF8 from body
      plain_part = emails.multipart? ? (emails.text_part ? emails.text_part.body.decoded : nil) : emails.body.decoded
      html_part = emails.html_part ? emails.html_part.body.decoded : nil

      mongoMessage = {"date" => emails.date.to_s, "subject" => emails.subject, "body" => plain_part.encode('UTF-8')}
      msgID = newsCollection.insert(mongoMessage) # add the document to the database and return its ID
      puts msgID

    For English and Hebrew it works perfectly, but it seems Gmail is sending Arabic with a different encoding. Replacing UTF-8 with ASCII-8BIT gives a similar error. I get the same result when using plain_part for plain email messages. I am handling emails from one specific source, so I can say with confidence that html_part is not causing the error. To make it extra weird, a Subject in Arabic is rendered perfectly. What encoding should I use?

    Read the article

  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the Pearson product correlation via map / reduce / finalize. The missing part is to restrict the documents (representing users) to be processed via a filter query. A simple query like

      mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' })

    I get to work. But my filter criteria are a little bit more complicated: I have one set of preferences which needs to have at least one common element, and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this code working in my map function, but I would prefer to separate it into either query params as supported by Mongoid or a JavaScript function. All my attempts to solve this have failed, since the code is either ignored or raises an error. I did a couple of tests. A regular find like

      User.where(:name.in => ['Arno', 'Bernd', 'Claudia'])

    works and returns

      #<Mongoid::Criteria:0x00000101f0ea40 @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}}, @options={}, @klass=User, @documents=[]>

    Trying the same with mapreduce

      User.collection.mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] })

    fails with

      `serialize': keys must be strings or symbols (TypeError)

    in bson-1.1.5. The intermediate query parameter looks like this:

      :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]}

    and at least @operator looks a bit weird to me. I'm also uncertain whether the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems on MacOS. What am I doing wrong? Thanks in advance.

    Read the article

  • Grails Domain.get() returns null for mongo's ObjectId

    - by Shashank Agrawal
    I'm using Grails 2.3.5 with mongodb 3.0.1 and no Hibernate installed. I have a domain class which uses Mongo's ObjectId:

      import org.bson.types.ObjectId

      class Category {
          ObjectId id
          String name
      }

    And it has a record in the Mongo database:

      {
          "_id": ObjectId("53f6c34c33a429240e2ab471"),
          "name": "art",
          "version": NumberLong("41")
      }

    When I do Category.get(new ObjectId("53f6c34c33a429240e2ab471")) somewhere in the Grails app, it returns null, but Category.get("53f6c34c33a429240e2ab471") actually returns the result. Why does the get() method not work with the ObjectId type?

    Read the article

  • mongoexport csv output array values

    - by 9point6
    I'm using mongoexport to export some collections into CSV files; however, when I try to target fields which are members of an array, I cannot get it to export correctly. The command I'm using:

      mongoexport -d db -c collection -fieldFile fields.txt --csv > out.csv

    and the contents of fields.txt are similar to:

      id
      name
      address[0].line1
      address[0].line2
      address[0].city
      address[0].country
      address[0].postcode

    where the BSON data would be:

      {
        "id": 1,
        "name": "example",
        "address": [
          {
            "line1": "flat 123",
            "line2": "123 Fake St.",
            "city": "London",
            "country": "England",
            "postcode": "N1 1AA"
          }
        ]
      }

    What is the correct syntax for exporting the contents of an array?

    Read the article

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to do something about the serialisation ourselves. There appear to be a variety of LZW implementations in JavaScript, some of which confuse me as to their suitability for in-browser use only versus transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to impose a noticeable performance drag when JavaScript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in JavaScript itself; BSON does not have a JavaScript implementation; and an alternative BISON project that I tested does not deserialise everything back to the original values (large numbers), and it does not look like any further development will happen either. What other options have others explored for Node.js?

    Read the article

  • MongoMapper - undefined method `keys'

    - by nimnull
    I'm trying to create a Document instance with params passed from the post-submitted form. My Mongo-mapped document looks like:

      class Good
        include MongoMapper::Document

        key :title, String
        key :cost, Float
        key :description, String
        timestamps!

        many :attributes

        validates_presence_of :title, :cost
      end

    And the create action:

      def create
        @good = Good.new(params[:good])
        if @good.save
          redirect_to @good
        else
          render :new
        end
      end

    params[:good] contains all valid document attributes - {"good"=>{"cost"=>"2.30", "title"=>"Test good", "description"=>"Test description"}} - but I've got a strange error from Rails:

      undefined method `keys' for ["title", "Test good"]:Array

    My gem list:

      *** LOCAL GEMS ***

      actionmailer (2.3.8) actionpack (2.3.8) activerecord (2.3.8) activeresource (2.3.8)
      activesupport (2.3.8) authlogic (2.1.4) bson (1.0) bson_ext (1.0) compass (0.10.1)
      default_value_for (0.1.0) haml (3.0.6) jnunemaker-validatable (1.8.4) mongo (1.0)
      mongo_ext (0.19.3) mongo_mapper (0.7.6) plucky (0.1.1) rack (1.1.0) rails (2.3.8)
      rake (0.8.7) rubygems-update (1.3.7)

    Any suggestions how to fix this error?

    Read the article

  • C++ property system interface for game editors (reflection system)

    - by Cristopher Ismael Sosa Abarca
    I have designed a reusable game engine for a project, and its functionality is like this: It is a completely scripted game engine; instead of the usual scripting languages such as Lua or Python, it uses Runtime-Compiled C++ together with a modified version of Cistron (a component-based programming framework) adapted to be compatible with Runtime-Compiled C++, and so on. Using the typical GameObject and Component classes of the component-based design pattern, it is serializable via JSON, BSON or binary, which is useful for selecting which objects will be loaded the next time.

    The main problem: We want to use our custom GameObjects and their components' properties in our level editor. Before, we used hardcoded functions to access GameObject base class virtual functions from the derived ones; if you want to modify a property specific to that class, you need to go into the code. The same situation happens with the derived classes of Component. In little projects there's no problem, but for larger projects it becomes tedious, lengthy and error-prone.

    I've researched a lot to find a solution, without luck. I tried the Ogitor property system (since our engine is Ogre-based), but we found it inappropriate for the component-based design: it's limited to the Ogre classes and can lead to performance overhead. We also tried some code we found on the Internet; we tested it and it worked a little, but we considered the macro and lambda abuse too horrible. Take a look (some code omitted):

      IWE_IMPLEMENT_PROP_BEGIN(CBaseEntity)
          IWE_PROP_LEVEL_BEGIN("Editor");
              IWE_PROP_INT_S("Id", "Internal id", m_nEntID, [](int n) {}, true);
          IWE_PROP_LEVEL_END();
          IWE_PROP_LEVEL_BEGIN("Entity");
              IWE_PROP_STRING_S("Mesh", "Mesh used for this entity", m_pModelName, [pInst](const std::string& sModelName) {
                  pInst->m_stackMemUndoType.push(ENT_MEM_MESH);
                  pInst->m_stackMemUndoStr.push(pInst->getModelName());
                  pInst->setModel(sModelName, false);
                  pInst->saveState();
              }, false);
              IWE_PROP_VECTOR3_S("Position", m_vecPosition, [pInst](float fX, float fY, float fZ) {
                  pInst->m_stackMemUndoType.push(ENT_MEM_POSITION);
                  pInst->m_stackMemUndoVec3.push(pInst->getPosition());
                  pInst->saveState();
                  pInst->m_vecPosition.Get()[0] = fX;
                  pInst->m_vecPosition.Get()[1] = fY;
                  pInst->m_vecPosition.Get()[2] = fZ;
                  pInst->setPosition(pInst->m_vecPosition);
              }, false);
              IWE_PROP_QUATERNION_S("Orientation (Quat)", m_quatOrientation, [pInst](float fW, float fX, float fY, float fZ) {
                  pInst->m_stackMemUndoType.push(ENT_MEM_ROTATE);
                  pInst->m_stackMemUndoQuat.push(pInst->getOrientation());
                  pInst->saveState();
                  pInst->m_quatOrientation.Get()[0] = fW;
                  pInst->m_quatOrientation.Get()[1] = fX;
                  pInst->m_quatOrientation.Get()[2] = fY;
                  pInst->m_quatOrientation.Get()[3] = fZ;
                  pInst->setOrientation(pInst->m_quatOrientation);
              }, false);
          IWE_PROP_LEVEL_END();
      IWE_IMPLEMENT_PROP_END()

    We are looking for a simplified way to do this, without confusing the programmers (it will be released to the public). I have found ways to achieve this, but they are only available for the common scripting languages such as Lua, or for editors using C#. It should also be portable: we can write "wrappers" for different GUI toolkits such as Qt or GTK. I'm also thinking of using Boost.Wave to get additional macro functionality without creating my own compiler. The properties designed for use in the editor are removed in the game, since the save file contains their data and loads it using a simple 'load' function to reduce unnecessary code bloat; this may also be useful if some GameObject property needs to be hidden.
    In summary: is there a way to implement a reflection (property) system for a level editor, based on properties from derived classes? Also, we can use C++11 and Boost (restricted only to Wave and PropertyTree).

    Read the article

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MB's). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

      - .NET 2.0 support, preferably with a FOSS implementation
      - Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
      - Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
      - Space and time efficient (XML has been excluded as option given this requirement)

    Options considered so far:

      - Protocol Buffers: was turned down by verdict of the documentation about Large Data Sets - since this comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
      - HDF5, EXI: do not seem to have .net implementations
      - SQLite/SQL Server Compact edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
      - BSON: does not appear to support requirement 3.
      - Fast Infoset: only seems to have paid .NET implementations.

    Any recommendations or pointers are greatly appreciated. Furthermore if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
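
    To sketch what the version-friendly requirement implies (my illustration, not part of the question): a tag-length-value layout lets an old reader skip fields it does not recognize, which is essentially how Protocol Buffers and BSON stay extensible when fields are added.

      using System.IO;

      // Minimal tag-length-value sketch: each field is (tag, length, payload),
      // so a reader that does not know a tag can skip `length` bytes and go on.
      public static class TlvFormat
      {
          public static void WriteField(BinaryWriter w, ushort tag, byte[] payload)
          {
              w.Write(tag);
              w.Write(payload.Length);
              w.Write(payload);
          }

          public static void ReadField(BinaryReader r)
          {
              ushort tag = r.ReadUInt16();
              int length = r.ReadInt32();
              byte[] payload = r.ReadBytes(length); // unknown tags: simply discard
              // dispatch on tag here for the fields this version understands
          }
      }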

    Read the article

  • Running bundle install fails trying to remote fetch from rubygems.org/quick/Marshal...

    - by dreeves
    I'm getting a strange error when doing bundle install:

      $ bundle install
      Fetching source index for http://rubygems.org/
      rvm/rubies/ree-1.8.7-2010.02/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:304:in `open_uri_or_path': bad response Not Found 404 (http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz) (Gem::RemoteFetcher::FetchError)

    I've tried bundle update, gem source -c, gem update --system, gem cleanup, etc. Nothing seems to solve this. I notice that the URL beginning with http://rubygems.org/quick does seem to be a 404 - I don't think that's any problem with my network, though if that's reachable for anyone else then that would be a simple explanation for my problem. More hints: If I just gem install resque-scheduler it works fine:

      $ gem install resque-scheduler
      Successfully installed resque-scheduler-1.9.7
      1 gem installed
      Installing ri documentation for resque-scheduler-1.9.7...
      Installing RDoc documentation for resque-scheduler-1.9.7...

    And here's my Gemfile:

      source 'http://rubygems.org'

      gem 'json'
      gem 'rails', '>=3.0.0'
      gem 'mongo'
      gem 'mongo_mapper', :git => 'git://github.com/jnunemaker/mongomapper', :branch => 'rails3'
      gem 'bson_ext', '1.1'
      gem 'bson', '1.1'
      gem 'mm-multi-parameter-attributes', :git => 'git://github.com/rlivsey/mm-multi-parameter-attributes.git'
      gem 'devise', '~>1.1.3'
      gem 'devise_invitable', '~> 0.3.4'
      gem 'devise-mongo_mapper', :git => 'git://github.com/collectiveidea/devise-mongo_mapper'
      gem 'carrierwave', :git => 'git://github.com/rsofaer/carrierwave.git', :branch => 'master'
      gem 'mini_magick'
      gem 'jquery-rails', '>= 0.2.6'
      gem 'resque'
      gem 'resque-scheduler'
      gem 'SystemTimer'
      gem 'capistrano'
      gem 'will_paginate', '3.0.pre2'
      gem 'twitter', '~> 1.0.0'
      gem 'oauth', '~> 0.4.4'

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes across 1000s of documents. Worse yet, there is no easy way to spread these out (in time), and they will usually be happening close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this will be? It seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filenames from my MongoDB documents. (I'm using Python and PyMongo, so I was looking at PyTables.) I'd prefer to avoid this if possible, though. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • Mongodb - how to deserialze when a property has an Interface return type

    - by Mark Kelly
    I'm attempting to avoid introducing any dependencies between my data layer and the client code that makes use of this layer, but am running into some problems when attempting to do this with Mongo (using MongoRepository). MongoRepository shows examples where you create types that reflect your data structure and inherit Entity where required, e.g.:

      [CollectionName("track")]
      public class Track : Entity
      {
          public string name { get; set; }
          public string hash { get; set; }
          public Artist artist { get; set; }
          public List<Publish> published { get; set; }
          public List<Occurence> occurence { get; set; }
      }

    In order to make use of these in my client code, I'd like to replace the Mongo-specific types with interfaces, e.g.:

      [CollectionName("track")]
      public class Track : Entity, ITrackEntity
      {
          public string name { get; set; }
          public string hash { get; set; }
          public IArtistEntity artist { get; set; }
          public List<IPublishEntity> published { get; set; }
          public List<IOccurenceEntity> occurence { get; set; }
      }

    However, the Mongo driver doesn't know how to treat these interfaces, and I understandably get the following error:

      An error occurred while deserializing the artist property of class sf.data.mongodb.entities.Track: No serializer found for type sf.data.IArtistEntity. --- MongoDB.Bson.BsonSerializationException: No serializer found for type sf.data.IArtistEntity.

    Does anyone have any suggestions about how I should approach this?
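
    One approach that has worked with the official C# driver (a hedged sketch, not from the original post; API details vary across driver versions) is to register class maps for the concrete types at startup, so the driver writes a _t discriminator for interface-typed members and can resolve the concrete class again when reading:

      using MongoDB.Bson.Serialization;

      public static class MongoBootstrap
      {
          // Assumes Artist : IArtistEntity exists somewhere in the data layer.
          public static void RegisterMaps()
          {
              if (!BsonClassMap.IsClassMapRegistered(typeof(Artist)))
              {
                  BsonClassMap.RegisterClassMap<Artist>(cm =>
                  {
                      cm.AutoMap();
                      cm.SetDiscriminator("Artist"); // stored as _t in the document
                  });
              }
          }
      }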

    Read the article

  • What is the most efficient way to convert to binary and back in C#?

    - by Saad Imran.
    I'm trying to write a general purpose socket server for a game I'm working on. I know I could very well use already-built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary, and to arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods utilize the .NET BitConverter class to convert the basic data types to binary. Anyways, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yea, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

    GSBitConverter.cs

    This is the main conversion class; it adds extension methods to the main data types to convert them to the binary format. It uses the BitConverter class to convert the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two; they just overload the type.

      public static class GSBitConverter
      {
          public static byte[] ToGSBinary(this short value)
          {
              return BitConverter.GetBytes(value);
          }

          public static byte[] ToGSBinary(this IEnumerable<short> value)
          {
              List<byte> bytes = new List<byte>();
              short length = (short)value.Count();
              bytes.AddRange(length.ToGSBinary());
              for (int i = 0; i < length; i++)
                  bytes.AddRange(value.ElementAt(i).ToGSBinary());
              return bytes.ToArray();
          }

          public static byte[] ToGSBinary(this bool value);
          public static byte[] ToGSBinary(this IEnumerable<bool> value);
          public static byte[] ToGSBinary(this IEnumerable<byte> value);
          public static byte[] ToGSBinary(this int value);
          public static byte[] ToGSBinary(this IEnumerable<int> value);
          public static byte[] ToGSBinary(this long value);
          public static byte[] ToGSBinary(this IEnumerable<long> value);
          public static byte[] ToGSBinary(this float value);
          public static byte[] ToGSBinary(this IEnumerable<float> value);
          public static byte[] ToGSBinary(this double value);
          public static byte[] ToGSBinary(this IEnumerable<double> value);
          public static byte[] ToGSBinary(this string value);
          public static byte[] ToGSBinary(this IEnumerable<string> value);
          public static string GetHexDump(this IEnumerable<byte> value);
      }

    Program.cs

    Here's the object that I'm converting to binary in a loop.

      class Program
      {
          static void Main(string[] args)
          {
              GSObject obj = new GSObject();
              obj.AttachShort("smallInt", 15);
              obj.AttachInt("medInt", 120700);
              obj.AttachLong("bigInt", 10900800700);
              obj.AttachDouble("doubleVal", Math.PI);
              obj.AttachStringArray("muppetNames", new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

              GSObject apple = new GSObject();
              apple.AttachString("name", "Apple");
              apple.AttachString("color", "red");
              apple.AttachBool("inStock", true);
              apple.AttachFloat("price", (float)1.5);

              GSObject lemon = new GSObject();
              lemon.AttachString("name", "Lemon");   // fixed: the original attached these to apple
              lemon.AttachString("color", "yellow");
              lemon.AttachBool("inStock", false);
              lemon.AttachFloat("price", (float)0.8);

              GSObject apricoat = new GSObject();
              apricoat.AttachString("name", "Apricoat");
              apricoat.AttachString("color", "orange");
              apricoat.AttachBool("inStock", true);
              apricoat.AttachFloat("price", (float)1.9);

              GSObject kiwi = new GSObject();
              kiwi.AttachString("name", "Kiwi");
              kiwi.AttachString("color", "green");
              kiwi.AttachBool("inStock", true);
              kiwi.AttachFloat("price", (float)2.3);

              GSArray fruits = new GSArray();
              fruits.AddGSObject(apple);
              fruits.AddGSObject(lemon);
              fruits.AddGSObject(apricoat);
              fruits.AddGSObject(kiwi);

              obj.AttachGSArray("fruits", fruits);

              Stopwatch w1 = Stopwatch.StartNew();
              for (int i = 0; i < 50000; i++)
              {
                  byte[] b = obj.ToGSBinary();
              }
              w1.Stop();

              Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
              Console.WriteLine(w1.ElapsedMilliseconds + "ms");
          }
      }

    Here's the code for some of my other classes that are used in the code above. Most of it is repetitive.

      GSObject GSArray GSWrappedObject
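
    One place the 5500ms likely goes is the List<byte> churn and the ElementAt(i) calls, which rescan the IEnumerable from the start on every iteration. A hedged alternative sketch (mine, not from the post): write into a single BinaryWriter over a MemoryStream, which produces the same bytes as BitConverter on little-endian platforms.

      using System.IO;

      public static class GSFastWriter
      {
          // Serialize a length-prefixed short array in one pass: no per-element
          // byte[] allocations, and indexing an array avoids ElementAt's rescans.
          public static byte[] Serialize(short[] values)
          {
              using (var ms = new MemoryStream())
              using (var w = new BinaryWriter(ms))
              {
                  w.Write((short)values.Length); // length prefix, as in ToGSBinary
                  foreach (short v in values)
                      w.Write(v);
                  return ms.ToArray();
              }
          }
      }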

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    @font-face { font-family: "Arial"; }@font-face { font-family: "Courier New"; }@font-face { font-family: "Wingdings"; }@font-face { font-family: "Calibri"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }h2 { margin: 12pt 0cm 3pt; page-break-after: avoid; font-size: 14pt; font-family: "Times New Roman"; font-style: italic; }a:link, span.MsoHyperlink { color: blue; text-decoration: underline; }a:visited, span.MsoHyperlinkFollowed { color: purple; text-decoration: underline; }span.Heading2Char { font-family: Calibri; font-weight: bold; font-style: italic; }div.Section1 { page: Section1; }ol { margin-bottom: 0cm; }ul { margin-bottom: 0cm; } Very interesting blog post by Todd Hoff at highscalability.com presenting “7 Years of YouTube Scalability Lessons in 30 min” based on a presentation from Mike Solomon, one of the original engineers at YouTube: …. The key takeaway away of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they’ve stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn’t mean YouTube doesn’t do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world’s largest websites? Read on and see... Stats @font-face { font-family: "Arial"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; } 4 billion Views a day 60 hours of video is uploaded every minute 350+ million devices are YouTube enabled Revenue double in 2010 The number of videos has gone up 9 orders of magnitude and the number of developers has only gone up two orders of magnitude. 1 million lines of Python code Stack @font-face { font-family: "Arial"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; } Python - most of the lines of code for YouTube are still in Python. Everytime you watch a YouTube video you are executing a bunch of Python code. Apache - when you think you need to get rid of it, you don’t. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache. Linux - the benefit of Linux is there’s always a way to get in and see how your system is behaving. No matter how bad your app is behaving, you can take a look at it with Linux tools like strace and tcpdump. MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometime it’s used a relational database or a blob store. It’s about tuning and making choices about how you organize your data. Vitess- a  new project released by YouTube, written in Go, it’s a frontend to MySQL. It does a lot of optimization on the fly, it rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It’s RPC based. Zookeeper - a distributed lock server. It’s used for configuration. Really interesting piece of technology. Hard to use correctly so read the manual Wiseguy - a CGI servlet container. Spitfire - a templating system. 
It has an abstract syntax tree that let’s them do transformations to make things go faster. Serialization formats - no matter which one you use, they are all expensive. Measure. Don’t use pickle. Not a good choice. Found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 time faster than the one you can download. ...Contiues. Read the blog Watch the video

    Read the article

1 2  | Next Page >