Search Results

Search found 6196 results on 248 pages for 'minimum requirements'.

Page 205 of 248

  • RPC for java/python with REST support, HTML monitoring and goodies

    - by Ran
    Here's my set of requirements: I'm looking for an RPC framework such as Thrift, Avro, or Protobuf (when adding services to it) which supports: an easy and intuitive IDL - no serial numbers, no manual versioning, simple... Avro is a good example of this. Works with Java and Python. Supports both a fast binary protocol and an HTTP-based RESTful style - I'd like to be able to use it for both backend-to-backend communication (java-java or python-java) as well as frontend-to-backend communication (javascript to java). The REST support needs to include &param=value input as GET/POST requests (configurable per request) and output in three possible formats: json, jsonp, XML. Compact, fast, backward compatible, easy to upgrade, etc... Provides some nice monitoring interfaces, such as JMX and web page status reports (e.g. packets in, packets out, error rate, etc). Ops friendly... no need to take the whole site down to release new versions. Both sync and async communication... other goodies are welcome... Is there something out there? So far I've looked at Thrift and Avro and they are both nice in some ways, but neither ticks every box on my list. Thanks
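
    A rough illustration of the "easy and intuitive IDL" point - a hedged sketch from memory of Avro's IDL, with the protocol, record and field names all invented: fields are matched by name rather than by serial number, so adding a field with a default value stays backward compatible.

        @namespace("com.example.search")
        protocol SearchService {
          record Query {
            string text;
            int maxResults;
          }
          record Hit {
            string id;
            double score;
          }
          array<Hit> search(Query q);
        }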

    Read the article

  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a library that I can re-use to add various methods involved in navigating/searching through a graph (nodes/relationships, or if you like vertices/edges). The generic requirements would be: There are existing classes in the main project that already implement the equivalent of the graph class (which contains the lists of nodes/relationships), the node class, and the relationship class (which links nodes together) - the main project likely already has persistence mechanisms for the info (e.g. these classes might be built using Entity Framework for persistence). Methods would need to be added to each of these 3 classes: (a) graph class - methods like "search all nodes", (b) node class - methods such as "find all children to depth i", (c) relationship class - methods like "return relationship type", "get parent node", "get child node". I assume there would be a need to inform the library of the class names used for the graph/node/relationship tables (as different projects might use different names). To some extent it would need to work like a generics collection does (where you pass the classes to the collection so it knows what they are). There also needs to be a way to inform the library of which node property to use for equality checks (e.g. if it were a graph of webpages, the equality field to use might be the URI path). I'm assuming that using abstract base classes wouldn't really work, as this would tie usage down to the same persistence approach, the same class names, etc. Whereas really I want the ability to add graph searching/walking methods to any project that has "graph-like" characteristics, as sketched below.
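
    One way to satisfy the "inform the library of the class names" and "equality property" points without abstract base classes - a minimal sketch, with all type and member names hypothetical - is to have the host project implement small interfaces on its existing (e.g. Entity Framework) classes, and write the library's extension methods against those interfaces:

        using System.Collections.Generic;

        // The host project maps its existing classes onto this interface;
        // the library never needs to know the concrete class names.
        public interface INode<TNode> where TNode : INode<TNode>
        {
            IEnumerable<TNode> Children { get; }
            object EqualityKey { get; }   // e.g. the URI path for a web-page graph
        }

        public static class GraphExtensions
        {
            // "find all children to depth i"
            public static IEnumerable<TNode> FindChildren<TNode>(this TNode node, int depth)
                where TNode : INode<TNode>
            {
                if (depth <= 0) yield break;
                foreach (TNode child in node.Children)
                {
                    yield return child;
                    foreach (TNode grandChild in child.FindChildren(depth - 1))
                        yield return grandChild;
                }
            }
        }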

    Read the article

  • Rails - eager load the number of associated records, but not the records themselves

    - by Max Williams
    I have a page that's taking ages to render out. Half of the time (3 seconds) is spent on a .find call which has a bunch of eager-loaded associations. All I actually need is the number of associated records in each case, to display in a table: I don't need the actual records themselves. Is there a way to just eager load the count? Here's a simplified example:

        @subjects = Subject.find(:all, :include => [:questions])

    In my table, for each row (i.e. each subject) I just show the values of the subject fields and the number of associated questions for each subject. Can I optimise the above find call to suit these requirements? I thought about using a group field, but my full call has a few different associations included, with some second-order associations, so I don't think group by will work:

        @subjects = Subject.find(:all,
                                 :include => [{:questions => :tags}, {:quizzes => :tags}],
                                 :order => "subjects.name")

    :tags in this case is a second-order association, via taggings. Here are my associations in case it's not clear what's going on:

        Subject:  has_many :questions;  has_many :quizzes
        Question: belongs_to :subject;  has_many :taggings;  has_many :tags, :through => :taggings
        Quiz:     belongs_to :subject;  has_many :taggings;  has_many :tags, :through => :taggings

    Grateful for any advice - max
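
    Since only the counts are displayed, one commonly used alternative - a sketch, assuming a questions_count integer column can be added to the subjects table - is Rails' counter cache, which keeps the count denormalized so no question rows need loading at all:

        # In the Question model; Rails then maintains subjects.questions_count
        # automatically as questions are created and destroyed:
        class Question < ActiveRecord::Base
          belongs_to :subject, :counter_cache => true
        end

        # The find no longer needs :include just for counting:
        @subjects = Subject.find(:all, :order => "subjects.name")
        # each subject now answers subject.questions_count with no extra query

    The same pattern would apply to each counted association (e.g. a quizzes_count column for quizzes).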

    Read the article

  • how can I save/keep-in-sync an in-memory graph of objects with the database?

    - by Greg
    Question - what is a good, best-practice approach to saving/keeping-in-sync an in-memory graph of objects with the database? Background: say I have the classes Node and Relationship, and the application is building up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is no doubt good for performance (e.g. traverse the graph from Node X to find the root parents). The graph does need to be persisted, however, into a database with tables NODES and RELATIONSHIPS. So what is a good, best-practice approach to saving/keeping-in-sync this in-memory graph of objects with the database? Ideal requirements would include: build up changes in-memory and then 'save' afterwards (mandatory); when saving, apply updates to the database in the correct order to avoid hitting any database constraints (mandatory); keep the persistence mechanism separate from the model, for ease of changing the persistence layer if needed, e.g. don't just wrap an ADO.NET DataRow in the Node and Relationship classes (desirable); a mechanism for doing optimistic locking (desirable). Or is the overhead of all this for a smallish application just not worth it, and should I just hit the database each time for everything? (Assuming the response times were acceptable - I would still like to avoid it, if it's not too much extra overhead, to remain somewhat scalable performance-wise.) A rough sketch of the build-up-then-save idea follows.
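
    As a sketch of the "build up changes in-memory, then save in a constraint-safe order" idea - essentially a hand-rolled unit of work; every name here is hypothetical, and a real version would also track updates, deletes and row versions for the optimistic-locking requirement:

        using System.Collections.Generic;

        public class Node { /* domain fields elided */ }
        public class Relationship { /* links two Nodes */ }

        // The persistence side hides behind an interface, so swapping the
        // storage layer later only means writing another mapper.
        public interface IGraphMapper
        {
            void Insert(Node n);
            void Insert(Relationship r);
        }

        public class GraphUnitOfWork
        {
            private readonly List<Node> newNodes = new List<Node>();
            private readonly List<Relationship> newRelationships = new List<Relationship>();

            public void RegisterNew(Node n) { newNodes.Add(n); }
            public void RegisterNew(Relationship r) { newRelationships.Add(r); }

            public void Save(IGraphMapper mapper)
            {
                // Nodes before relationships, so FK constraints are satisfied;
                // deletes would run in the reverse order.
                foreach (Node n in newNodes) mapper.Insert(n);
                foreach (Relationship r in newRelationships) mapper.Insert(r);
                newNodes.Clear();
                newRelationships.Clear();
            }
        }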

    Read the article

  • The rules to connect to a web service through SSL and certificates

    - by blgnklc
    There is a web service running on Tomcat on a server. It is built on Java Servlets. It listens for callers on an SSL-enabled HTTP port, so its web service address looks like: https://172.29.12.12/axis/services/XYZClient?wsdl On the other hand, I want to connect to the web service above from a Windows application built on the .NET framework. When I try to connect to the web service from my computer, I get some specific errors. Firstly I got a proxy authentication error, so I added some new lines to my code:

        Dim cr As System.Net.NetworkCredential = New System.Net.NetworkCredential("xname", "xsurname", "xdomainname")
        Dim myProxy As New WebProxy("http://mar.xxxyyy.com", True)
        myProxy.Credentials = cr

    Secondly, after these modifications, it says "bad request", and I could not get past this error. Moreover, I tried to connect to the web service on the same computer: I copied my executable program to the computer where the web service runs, and the error was: "The underlying connection was closed: Could not establish trust relationship for SSL/TLS secure channel". PS: When I try to connect to the web service using Internet Explorer, I first see some warnings about accepting an unknown certificate; I click "take me to the web service" and get there fine. I want to know what the basic elements for connecting to such a web service are - could you please tell me the requirements I have to cover in my Windows project? regards bk
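
    The "could not establish trust relationship" error usually means the client machine does not trust the server's certificate (self-signed or from an unknown CA - the same thing Internet Explorer warns about). The proper fix is to install the certificate, or its issuing CA, into the client's trusted store; purely for diagnosis, .NET also lets you relax the check, as in this hedged VB.NET sketch:

        ' For testing only - this trusts ANY certificate. The production fix is
        ' installing the server's certificate into the trusted root store.
        Imports System.Net
        Imports System.Net.Security
        Imports System.Security.Cryptography.X509Certificates

        Module SslTestHelper
            Sub RelaxCertificateValidation()
                ServicePointManager.ServerCertificateValidationCallback = _
                    Function(sender As Object, cert As X509Certificate, _
                             chain As X509Chain, sslErrors As SslPolicyErrors) True
            End Sub
        End Module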

    Read the article

  • How to compute the average probe length for success and failure - linear probing (hash tables)

    - by fang_dejavu
    hi everyone, I'm doing an assignment for my Data Structures class. We were asked to study linear probing with load factors of .1, .2, .3, ..., .9. The formula for testing: the average probe length using linear probing is roughly (1 + 1/(1-L))/2 for success and (1 + 1/(1-L)^2)/2 for failure. We are required to find the theoretical values using the formula above, which I did (just plug the load factor into the formula); then we have to calculate the empirical values (which I am not quite sure how to do). Here is the rest of the requirements: **For each load factor, 10,000 randomly generated positive ints between 1 and 50000 (inclusive) will be inserted into a table of the "right" size, where "right" is strictly based upon the load factor you are testing. Repeats are allowed. Be sure that your formula for randomly generated ints is correct. There is a class called Random in java.util. USE it! After a table of the right (based upon L) size is loaded with 10,000 ints, do 100 searches of newly generated random ints from the range of 1 to 50000. Compute the average probe length for each of the two formulas and indicate the denominators used in each calculation. So, for example, each test for a .5 load would have a table of size approximately 20,000 (adjusted to be prime), and similarly each test for a .9 load would have a table of approximate size 10,000/.9 (again adjusted to be prime). The program should run displaying the various load factors tested, the average probe length for each search (the two denominators used to compute the averages will add to 100), and the theoretical answers using the formula above.** How do I calculate the empirical success?
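
    For the empirical side, the idea is simply to count the probes each of the 100 searches actually makes, keeping separate totals (and denominators) for hits and misses. A sketch of one load factor in Java - the prime-finding and hashing details here are my assumptions, not part of the assignment text:

        import java.util.Random;

        public class LinearProbeStats {
            public static void main(String[] args) {
                double load = 0.5;                     // try .1, .2, ..., .9
                int n = 10000;
                int m = nextPrime((int) (n / load));   // table of the "right" size
                int[] table = new int[m];              // 0 means "empty"
                Random rnd = new Random();

                for (int k = 0; k < n; k++) {          // load 10,000 random ints
                    int key = 1 + rnd.nextInt(50000);
                    int i = key % m;
                    while (table[i] != 0) i = (i + 1) % m;   // linear probing
                    table[i] = key;
                }

                int hitProbes = 0, hits = 0, missProbes = 0, misses = 0;
                for (int s = 0; s < 100; s++) {        // 100 fresh random searches
                    int key = 1 + rnd.nextInt(50000);
                    int i = key % m, probes = 1;
                    while (table[i] != 0 && table[i] != key) {
                        i = (i + 1) % m;               // stop at key (hit) or empty (miss)
                        probes++;
                    }
                    if (table[i] == key) { hits++; hitProbes += probes; }
                    else                 { misses++; missProbes += probes; }
                }
                // the two denominators (hits and misses) add to 100, as required
                System.out.printf("L=%.1f success avg=%.2f (n=%d) failure avg=%.2f (n=%d)%n",
                        load, hits == 0 ? 0.0 : (double) hitProbes / hits, hits,
                        misses == 0 ? 0.0 : (double) missProbes / misses, misses);
            }

            static int nextPrime(int x) {
                for (int p = x; ; p++) {
                    boolean prime = p > 1;
                    for (int d = 2; (long) d * d <= p; d++)
                        if (p % d == 0) { prime = false; break; }
                    if (prime) return p;
                }
            }
        }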

    Read the article

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single-user and a multi-user application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together: I want to deploy the embedded database in the single-user setup and, of course, the server in the multi-user setup. Past releases have been based on MSDE, but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign, I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application, so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered: SQL Server / SQL Server Compact Edition; Firebird (the same DB engine is available in two different server modes and as an embedded dll). Each has its own merits, but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well; I don't expect it to strain whatever database I eventually choose, so easy configuration and deployment hold more weight than performance.

    Read the article

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements: The property DateOfBirth of the entity Patient is stored in SQL Server as a string. It would be ideal if the entity class did not also use the "string" type but rather a DateTime type (I would expect this to be possible, since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place that would convert to and from DateTime/string so that the entity and SQL Server stay in sync? I cannot change the storage layer's structure, so I have to work around it. WCF Data Services (read-only, so no need for saving changes) need to be used, since clients will be able to use LINQ expressions to consume the service. They can generate results based on any given query scenario they need and are not constrained by a single method such as GetPatient(int ID). I've tried to use DTOs, but ran into the problem of mapping the ObjectContext to a DTO; I don't think that is theoretically possible... or too complicated if it is. I've tried to use Self-Tracking Entities, but they require the metadata from the .edmx file if I'm correct, and that doesn't allow a different property data type. I also want to add customizations to my entity getter methods, so that a property "MRN" of type "string" has .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods, but the problem with that is that Entity Framework will overwrite them the next time it regenerates the entity classes. Is there a permanent place I can put these? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?
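
    On the "permanent place" question: the designer-generated entity classes are declared as partial classes, so members added in a separate file survive regeneration. A sketch (the stored date format and the property names are assumptions from the question); note that such computed properties are not part of the EDM, so whether they surface through a WCF Data Service is a separate question:

        using System;
        using System.Globalization;

        // Lives in its own file; regenerating the .edmx never touches it.
        public partial class Patient
        {
            public DateTime? DateOfBirthValue
            {
                get
                {
                    DateTime parsed;
                    return DateTime.TryParseExact(DateOfBirth, "yyyy-MM-dd",
                            CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed)
                        ? (DateTime?)parsed : null;
                }
            }

            public string CleanMrn
            {
                get { return MRN == null ? null : MRN.Replace("MR~", string.Empty); }
            }
        }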

    Read the article

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements: Oracle backend; rapid development; (L)GPL-free; free. I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now (a sketch of the idea follows), but in the meantime I have two questions: Are there any problems I overlooked with this solution? Are there any other approaches you would recommend to convert DataSets to POCOs? Thanks in advance.
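
    For the record, the reflection approach can be written once, generically - a sketch (property names are assumed to mirror column names; value-type targets would need null handling beyond the DBNull check shown):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Reflection;

        public static class DataSetMapper
        {
            // One method for all POCO classes: match columns to writable
            // properties by name.
            public static List<T> ToList<T>(DataTable table) where T : new()
            {
                var result = new List<T>();
                PropertyInfo[] props = typeof(T).GetProperties();
                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (PropertyInfo p in props)
                    {
                        if (!p.CanWrite || !table.Columns.Contains(p.Name))
                            continue;
                        object value = row[p.Name];
                        p.SetValue(item, value == DBNull.Value ? null : value, null);
                    }
                    result.Add(item);
                }
                return result;
            }
        }

    Usage would then be roughly DataSetMapper.ToList<SomePoco>(dataSet.Tables[0]), with .AsQueryable() on the result if an IQueryable is wanted (SomePoco being whatever class mirrors the table).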

    Read the article

  • Scrum - Responding to traditional RFPs

    - by Todd Charron
    Hi all, I've seen many articles about how to put together agile RFPs and negotiate agile contracts, but what about when you're responding to a more traditional RFP? Any advice on how to meet the requirements of the RFP while still presenting an agile approach? A lot of these traditional RFPs request specific technical implementations, timelines, and costs, while also requesting exact details about milestones and how the technical solutions will be implemented. While I'm sure it's normal in traditional waterfall to pretend that these things are facts, it seems wrong to commit to something like this just to get through the initial screening process if you're an agile organization. What methods have you used to respond to more traditional RFPs? Here's a sample one grabbed from Google: http://www.investtoronto.ca/documents/rfp-web-development.pdf In particular: "3. A detailed work plan outlining how they expect to achieve the four deliverables within the timeframe outlined. Plan for additional phases of development." and "8. The detailed cost structure, including per diem rates for team members, allocation of hours between team members, expenses and other out of pocket disbursements, and a total upset price."

    Read the article

  • To implement a remote desktop sharing solution

    - by Cameigons
    Hi, I'm in the planning/modeling phase of developing a remote desktop sharing solution, which must be web-browser based. In other words: a user will be able to see and interact with someone's remote desktop using his web browser. All the user who wants to share his desktop needs, besides his browser, is to install an add-in, which he's going to be prompted about when necessary. The add-in is required since (afaik) no browser technology allows desktop control from an app running within the browser alone. The add-in installation process must be as simple and transparent as possible to the user (similar to AdobeConnectNow, in case anyone's acquainted with it). The user can share his desktop with lots of people at the same time, but concede desktop control to only one of them at a time (makes no sense otherwise). Project requirements: All technology employed must be open-source-license compatible. Both front ends are going to be in Flash (browser). Must work on Linux, Windows XP (and later) and Mac OS X. Must work at least with IE7 (and later) and Firefox 3.0 (and later). At the very least, once the sharer's stream hits the server from where it'll be broadcast, from there on it must be broadcast in FLV (so I'm deciding whether to do the encoding on the client's machine (the one sharing the desktop) or send it in some other format to the server and encode it there). Performance and scalability are important: it must be able to handle hundreds of simultaneous users (one desktop sharer, the rest viewers). We'll definitely be using Red5. My doubts mostly concern implementing the desktop publisher side (add-in and streamer): 1) Are you aware of other projects that I could look into for ideas? (I'm aware of bigbluebutton.org and code.google.com/p/openmeetings) 2) Should I base myself on VNC? 3) Bearing in mind the need to have it working cross-platform, what language should I go with? (My team is very used to Java and I have some knowledge of C/C++, but anything goes really.) 4) Any other advice is appreciated.

    Read the article

  • Minimal "Task Queue" with stock Linux tools to leverage Multicore CPU

    - by Manuel
    What is the best/easiest way to build a minimal task queue system for Linux using bash and common tools? I have a file with 9'000 lines; each line has a bash command line, and the commands are completely independent:

        command 1 > Logs/1.log
        command 2 > Logs/2.log
        command 3 > Logs/3.log
        ...

    My box has more than one core and I want to execute X tasks at the same time. I searched the web for a good way to do this. Apparently, a lot of people have this problem, but nobody has a good solution so far. It would be nice if the solution had the following features: can interpret more than one command per line (e.g. command; command); can interpret stream redirects on the lines (e.g. ls > /tmp/ls.txt); only uses common Linux tools. Bonus points if it works on other Unix clones without too-exotic requirements.
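
    For reference, the closest thing to this with stock tools is GNU xargs' -P option, which keeps up to N jobs running at once. A sketch (assumes GNU xargs for -d/-P, and a commands.txt with one full command line per line):

        # Run 4 lines at a time; each line is handed to 'sh -c', so ';' chains
        # and redirects like '> Logs/1.log' work exactly as written.
        xargs -d '\n' -P 4 -I CMD sh -c 'CMD' < commands.txt

        # If GNU parallel happens to be installed, the equivalent is:
        # parallel -j4 < commands.txt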

    Read the article

  • Android "first time" app user tutorial

    - by EGHDK
    I'm trying to create an opening tutorial that consists of four panes for my application, but since I'm new to Android, I want to make sure I'm considering all of my options before marking this task as "complete". I know of three ways, but can only really accomplish one. There are no hard requirements for this tutorial, but some "wanted features" would be a sliding action to each pane, the image and the bottom (navigation circles) not moving, and the title on top not moving.

    One way: 4 separate activities and 4 separate layouts. The red-circled items are textViews that are centered horizontally and pushed off the top. The white-circled items are imageViews that are centered horizontally and vertically. The purple-circled items are imageViews that are centered horizontally and pushed off the bottom.

    Second way: 4 fragments on one activity. Fragments were difficult to learn, but the more I read about them and see tutorials on them, the more they seem to only really be used for tablets. Would this be a valid way to accomplish it?

    Third way: ViewPager? http://android-developers.blogspot.com/2011/08/horizontal-view-swiping-with-viewpager.html I've never used this before, but I know it's an option.

    Final question: which way is used most often - what's the proper way to implement this? Is there any way to have only the middle part (the image) slide in, while the title (top) and the navigation images (bottom) just change once the image slides in?
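
    On the final question: with the third way, whatever sits outside the pager never moves. A layout sketch (compatibility-library ViewPager; all ids are placeholders) - only the pager content swipes, while the title and the dot row are updated from an OnPageChangeListener when the page settles:

        <!-- Only the ViewPager content swipes; the title and the
             navigation-dot row sit outside it and therefore never move. -->
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="match_parent"
            android:layout_height="match_parent">

            <TextView
                android:id="@+id/tutorial_title"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:gravity="center_horizontal" />

            <android.support.v4.view.ViewPager
                android:id="@+id/tutorial_pager"
                android:layout_width="match_parent"
                android:layout_height="0dp"
                android:layout_weight="1" />

            <LinearLayout
                android:id="@+id/tutorial_dots"
                android:orientation="horizontal"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="center_horizontal" />
        </LinearLayout>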

    Read the article

  • How to structure an index for type-ahead on an extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open-source suggestions as well. I am looking for advice, tales from the trenches, or - even better - direct instruction on what I will need in terms of hardware and software structure. Requirements: Must have: the ability to do starts-with matching (I type 'st' and it should match 'Stephen'); the ability to return results very quickly - I'd say 500ms is an upper bound. Nice to have: the ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style; in-word substring matching, so that, for example, 'st' would match 'bestseller'. Note: this index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
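
    As a baseline for the must-haves, prefix matching is a built-in Lucene query type; a sketch against a Lucene 3.x-era API (the "name" field and the lowercasing convention are assumptions). The 'bestseller' nice-to-have is usually handled differently - indexing edge n-grams of each term at index time, so in-word prefixes become ordinary term lookups:

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.PrefixQuery;
        import org.apache.lucene.search.TopDocs;

        // 'st' matches terms starting with 'st', e.g. 'stephen',
        // assuming the field was indexed lowercased.
        public class TypeAheadLookup {
            private final IndexSearcher searcher;

            public TypeAheadLookup(IndexSearcher searcher) {
                this.searcher = searcher;
            }

            public TopDocs suggest(String prefix, int max) throws java.io.IOException {
                PrefixQuery q = new PrefixQuery(new Term("name", prefix.toLowerCase()));
                return searcher.search(q, max);
            }
        }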

    Read the article

  • Trying to find a PHP5 API-based embeddable CMS

    - by StrangeElement
    I've been making the rounds for a CMS that I can use as an API, in a sort of "embedded" mode. By this I mean that I don't want the CMS to do any logic or presentation; I want it to be used as an API which I can then use within an existing site. I don't want to be tied to the architecture of the CMS. A good example of this is NC-CMS (http://www.nconsulting.ca/nc-cms/). All it needs is an include at the top; then, wherever editable content is desired, it's only a function call with a unique label. It's also perfect in the sense that it allows differentiating between small strings (like titles and labels) and texts (which require a rich-text editor). It's the only CMS I found that fits this description, but it is a little too light, as it does not handle site structure. I need to be able to let my client add pages, choosing an existing template for the layout. A minimal back-end is required. WordPress also fits some requirements in that it handles only content editing and gives themes freedom by letting them call the content where and how they want it. But it is article-based and backwards, in that it embeds sites (as themes) within its structure, rather than being embeddable in sites the way NC is. It's funny how, checking out all the CMSes out there, almost all of them claim that most CMSes are not self-sufficient and do not handle application logic, while almost every single one I found - with only one exception - does exactly that. I would appreciate any CMS that fits the general description.

    Read the article

  • Creating ODT and PDF files as the end result

    - by Bill Zimmerman
    Hello, I've been working on an app to create various document formats for a while now, and I've had limited success. Ideally, I'd like to dynamically create a fairly simple ODT/PDF/DOC file. I've been focusing my efforts on ODT, because it is editable and open enough that there are several tools which will convert it to any of the other formats I need. The problem is that the ODT XML files are NOT simple, and there aren't any good-quality APIs I could find (especially in Python). So far, I've had the most success creating a template ODT file and then manipulating the DOM in Python as needed. This is OK generally, but is quickly becoming inadequate and requires too much tweaking every single time I need to alter one of the templates. The requirements are: 1) produce a simple document that will have lists, paragraphs, and the ability to draw simple graphics on the page (boxes, circles, etc.); 2) the ability to specify page size - and the different formats should generally print the exact same output when sent to a printer. My questions: 1) Are there any other ways I can produce ODT/PDF/DOC files? 2) Would LaTeX be acceptable? I've never really used it; does anyone have experience converting LaTeX files into other formats? 3) Would it be possible to use HTML? There are a lot of converters online. Technically you can specify dimensions in mm/cm, etc., but I am worried that the printed output will differ between browsers/converters... Any other ideas?
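
    On question 2: the stated requirements map well onto LaTeX. A minimal sketch (geometry and TikZ are one package choice among several) showing exact page dimensions, lists and simple drawn shapes; pdflatex produces the PDF directly, and for ODT, converters such as pandoc exist, though fidelity should be tested rather than assumed:

        \documentclass{article}
        \usepackage[a4paper,margin=20mm]{geometry} % exact page size and margins
        \usepackage{tikz}                          % simple drawn graphics
        \begin{document}

        \section*{Example}
        \begin{itemize}
          \item First point
          \item Second point
        \end{itemize}

        \begin{tikzpicture}
          \draw (0,0) rectangle (3,1);   % a box
          \draw (5,0.5) circle (0.5);    % a circle
        \end{tikzpicture}

        \end{document}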

    Read the article

  • database setup for web application

    - by vbNewbie
    I have an application that requires a database, and I have already set up tables but am not sure if they match the requirements of the app. The app is a crawler which fetches web URLs, crawls them, and stores appropriate URLs and posts, and all this is based on client requests, which are stored as projects. So for each URL stored there is one post, for each client there are many projects, and for each project there are many types of requests. We get a client with a request, assign them a project name, and then use the request to search for content and store the URL and post. A request could already exist and should not be duplicated, but it should be associated with the right client, project, post, etc. Here is my schema now:

        url table:     urlId PK, queryId FK, url
        post table:    postId PK, urlId FK, post, date
        request table: queryId PK, request
        client table:  clientId PK, clientName, projectId FK
        project table: projectId PK, queryId FK, project

    Does this look right, or does anyone have suggestions? Of course my stored procedures and insert statements will have to be in depth.

    Read the article

  • Sending and receiving IM messages via controller in Rails

    - by Grnbeagle
    Hi, I need a way to handle XMPP communication in my Rails app. My requirements are: keep an instance of an XMPP client running and logged in as one specific user (my bot user); trigger an event from a controller to send a message and wait for a reply. The message is sent to another machine equipped with a bot, so the reply is supposed to be returned quickly. I installed xmpp4r and backgrounDrb similar to what's described here, but backgrounDrb seems to have evolved and I couldn't get it to wait for a reply. If it has to happen asynchronously, I am willing to use a server-push technology to notify the browser when the reply arrives. To give you a better idea, here are snippets of my code.

    In the controller:

        class ServicesController < ApplicationController
          layout 'simple'

          def index
            render :text => "index"
          end

          def show
            @my_service = Service.find(params[:id])
            worker = MiddleMan.worker(:jabber_agent_worker)
            worker.send_request(:arg => {:jid => "someuser@someserver", :cmd => "help"})
            render :text => "testing"
          end
        end

    In the worker script:

        require 'xmpp4r'
        require 'logger'

        class JabberAgentWorker < BackgrounDRb::MetaWorker
          set_worker_name :jabber_agent_worker

          def create(args = nil)
            jid = Jabber::JID.new('myagent@myserver')
            @client = Jabber::Client.new(jid)
            @client.connect
            @client.auth('pass')
            @client.send(Jabber::Presence.new.set_show(:chat).set_status('BackgrounDRb'))
            @client.add_message_callback do |message|
              logger.info("**** message received: #{message}")   # never reaches here
            end
          end

          def send_request(args = nil)
            to_jid = Jabber::JID.new(args[:jid])
            message = Jabber::Message::new(to_jid, args[:cmd]).set_type(:normal).set_id('1')
            @client.send(message)
          end
        end

    If anyone can tell me any of the following, I'd much appreciate it: the issue with my backgrounDrb usage; other background-process alternatives appropriate for XMPP interactions; other ways of achieving this. Thanks in advance.

    Read the article

  • Hierarchical/Nested Database Structure for Comments

    - by Stephen Melrose
    Hi, I'm trying to figure out the best approach for a database schema for comments. The problem I'm having is that the comments system will need to allow nested/hierarchical comments, and I'm not sure how to design this properly. My requirements are: comments can be made on comments, so I need to store the tree hierarchy; I need to be able to query the comments in tree-hierarchy order, but efficiently - preferably in a fast single query, though I don't know if this is possible; and I'd need to make some weird queries, e.g. pull out the latest 5 root comments and a maximum of 3 children for each one of those. I read an article on the MySQL website on this very subject: http://dev.mysql.com/tech-resources/articles/hierarchical-data.html The "Nested Set Model" in theory sounds like it will do what I need, except I'm worried about querying the thing, and also about inserting. If this is the right approach, how would I do my 3rd requirement above? And if I have 2000 comments and add a new sub-comment on the first comment, that will be a LOT of updating to do - this doesn't seem right to me. Or is there a better approach for the type of data I'm wanting to store and query? Thank you
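
    Since inserts are the worry with nested sets, one alternative worth weighing is a materialized path - a sketch in MySQL syntax (column names invented; the path is typically filled in right after insert, once the new row's id is known). Inserting never rewrites other rows, and tree order is a plain ORDER BY:

        -- Each comment stores its ancestry as a zero-padded path.
        CREATE TABLE comments (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            parent_id INT NULL,              -- NULL for root comments
            root_id   INT NOT NULL,          -- the thread this comment belongs to
            path      VARCHAR(255) NOT NULL, -- e.g. '000012/000047/000301'
            body      TEXT NOT NULL,
            created   DATETIME NOT NULL
        );

        -- A whole thread in tree order, in one fast query:
        SELECT * FROM comments WHERE root_id = 42 ORDER BY path;

        -- The latest 5 root comments:
        SELECT * FROM comments WHERE parent_id IS NULL ORDER BY created DESC LIMIT 5;

        -- Up to 3 children of one of those roots:
        SELECT * FROM comments WHERE parent_id = 42 ORDER BY created LIMIT 3;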

    Read the article

  • Commercial software, but open and free for personal/educational use. How to license?

    - by Ivan
    I am developing software to sell for business use, but am willing to make it free and open-source for personal and educational use. I can see the following requirements I would like the license to set: Personal and educational usage of the program and its source code is to be free. In case of publishing of derivative works, the original work and author (me) must be mentioned (including a textual link to my website in a not-too-hidden place), and the derivative work must have a different name. A derivative work can be closed-source. In every case of commercial usage of my work or any derivative work made by anyone - where the end-user is a commercial body: a company (except non-profit organizations), an individual entrepreneur, or a government office - the end-user, service provider or derivative author must buy a commercial license from me. I mean no guarantees or responsibilities, whether expressed or implied... (except the case when one explicitly purchases a support service contract from me and that particular contract specifies a responsibility). Is there a known common license for this case? As far as I can see now, it cannot be OSI-approved, as it does not comply with §6 of the OSI definition of open source. But there could still be a common, known, reusable license for this case, as it looks quite natural, I think.

    Read the article

  • Returning an object from a function

    - by brainydexter
    I am really confused now about how, and with which method, to return an object from a function. I want some feedback on the solutions for the given requirements.

    Scenario A: the returned object is to be stored in a variable which need not be modified during its lifetime. Thus:

        const Foo SomeClass::GetFoo() {
            return Foo();
        }

    invoked as:

        someMethod() {
            const Foo& l_Foo = someClassPInstance->GetFoo();
            //...
        }

    Scenario B: the returned object is to be stored in a variable which will be modified during its lifetime. Thus:

        void SomeClass::GetFoo(Foo& a_Foo_ref) {
            a_Foo_ref = Foo();
        }

    invoked as:

        someMethod() {
            Foo l_Foo;
            someClassPInstance->GetFoo(l_Foo);
            //...
        }

    I have one question here: let's say that Foo cannot have a default constructor. How would you deal with that in this situation, since we can't write "Foo l_Foo;" anymore?

    Scenario C:

        Foo SomeClass::GetFoo() {
            return Foo();
        }

    invoked as:

        someMethod() {
            Foo l_Foo = someClassPInstance->GetFoo();
            //...
        }

    I think this is not the recommended approach, since it would incur constructing extra temporaries. What do you think? Also, do you recommend a better way to handle this?
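
    Two facts worth pinning down here, shown in a small sketch: binding a returned temporary to a local const reference (Scenario A) legally extends the temporary's lifetime, and in Scenario C compilers routinely apply return-value optimisation, so the feared extra temporaries are generally elided - which also sidesteps Scenario B's no-default-constructor problem:

        // Foo has no default constructor, yet all call sites below work
        // without one; (N)RVO typically elides the copy out of GetFoo().
        class Foo {
        public:
            explicit Foo(int v) : value(v) {}
        private:
            int value;
        };

        class SomeClass {
        public:
            Foo GetFoo() { return Foo(42); }   // usually constructed directly
        };                                     // in the caller's storage

        int main() {
            SomeClass instance;

            const Foo& ref = instance.GetFoo(); // Scenario A: const-ref binding
                                                // extends the temporary's lifetime
            Foo copy = instance.GetFoo();       // Scenario C: copy elided by RVO
                                                // in practice
            (void)ref; (void)copy;
            return 0;
        }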

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.

    Deterministic (things that aren't going to change during gameplay): attributes (Strength, Agility, etc.) and their descriptions; skills and their descriptions and requirements; races, factions, equipment, etc.; base attribute/skill/equipment loadouts for monsters.

    Nondeterministic (things that will change a lot during gameplay): beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.; player inventory, cash, experience, level; player quest states; player FactionRelationships... and so on.

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

        Det model/db             Nondet model/db
        ____________             ________________________
        |Attributes |            |PlayerAttributeModifiers|
        |-----------|            |------------------------|
        |Id         |            |Id                      |
        |Name       |            |AttributeId             |
        |Description|            |SourceId                |
         -----------             |Value                   |
                                  ------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with its own database? With distinct models/dbs, it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

    Read the article

  • best practice - logging events (general) and changes (database)

    - by b0x0rz
    need help with logging all activities on a site as well as database changes. Requirements: should be in a database; should be easily searchable by initiator (user name / session id), event (activity type) and event parameters. I can think of a database design, but it either involves a lot of tables (one per event), so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 of numeric type and 7 of text type) where everything is logged in one table, with an event-type field determining what parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)... Examples of entries (the usual things): [username] login failed @datetime; [username] login successful @datetime; [username] changed password, estimated security of password [low/ok/high/perfect] @datetime; [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime; [username] changed profile name from [old name] to [new name] @datetime; [username] verified name with [credit card type] credit card @datetime; database table [table name] purged of old entries @datetime; etc... So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design; yet as you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches - BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts? (A middle-ground sketch follows.) Edit: also, is there maybe an authoritative list of such (likely) events somewhere? (Stack Overflow says the question appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help.)
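
    As the promised middle ground between one-table-per-event and 7+7 generic columns - a sketch in SQL Server-flavoured SQL, with all names invented: one row per event plus one row per parameter, so no event type ever runs out of fields, and both initiator and parameters stay searchable:

        CREATE TABLE events (
            id         INT IDENTITY PRIMARY KEY,
            event_type VARCHAR(50)  NOT NULL,  -- e.g. 'login_failed', 'search_click'
            initiator  VARCHAR(100) NOT NULL,  -- user name or session id
            created_at DATETIME     NOT NULL
        );

        CREATE TABLE event_params (
            event_id INT          NOT NULL REFERENCES events(id),
            name     VARCHAR(50)  NOT NULL,   -- e.g. 'search_string', 'result_id'
            value    VARCHAR(400) NOT NULL,
            PRIMARY KEY (event_id, name)
        );

        -- "search clicks for 'foo' by initiator X":
        SELECT e.* FROM events e
        JOIN event_params p ON p.event_id = e.id
        WHERE e.initiator = 'X' AND e.event_type = 'search_click'
          AND p.name = 'search_string' AND p.value = 'foo';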

    Read the article

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store such an amount of data in memory, and in my requirements I am very limited: I have to store the data in an embedded system, so I am very limited in space. I would therefore like to ask for recommended methods I can use to reduce the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers, I mean 32-bit values. Basically, the idea is to use some compression method to reduce the amount of memory, but without losing much precision. This thing needs to run in hardware, so the computation overhead cannot be very high. In my algorithm, I have to access one value of the table, do some operations with it, and afterwards update the value. In the end, what I should have is a function to which I pass an index and get a value back, and another function to write a value into the table. I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables; does anyone know any other method? Thanks.
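
    One table-shaped compression scheme with a hard bound on precision loss is block-wise linear quantization - a sketch in C (the block size and 8-bit codes are tunable assumptions, and re-quantizing a block when a written value leaves its range is left out):

        #include <stdint.h>

        /* Each block of 64 entries stores a 32-bit base and a 32-bit step;
         * entries are kept as 8-bit codes, so value ~= base + code * step.
         * Memory drops from 32 to ~9 bits per entry; error is bounded by step/2. */
        #define BLOCK 64

        typedef struct {
            int32_t base;           /* minimum value in the block            */
            int32_t step;           /* (max - min + 254) / 255, at least 1   */
            uint8_t code[BLOCK];    /* quantized entries                     */
        } Block;

        static int32_t lut_get(const Block *t, uint32_t i) {
            const Block *b = &t[i / BLOCK];
            return b->base + (int32_t)b->code[i % BLOCK] * b->step;
        }

        static void lut_set(Block *t, uint32_t i, int32_t v) {
            Block *b = &t[i / BLOCK];
            int32_t c = (v - b->base + b->step / 2) / b->step;  /* round to code */
            if (c < 0) c = 0;
            if (c > 255) c = 255;   /* clamps if v leaves the block's range */
            b->code[i % BLOCK] = (uint8_t)c;
        }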

    Read the article

  • Wrappers of primitive types in ArrayList vs arrays

    - by ismail marmoush
    Hi, In "Core java 1" I've read CAUTION: An ArrayList is far less efficient than an int[] array because each value is separately wrapped inside an object. You would only want to use this construct for small collections when programmer convenience is more important than efficiency. But in my software I've already used Arraylist instead of normal arrays due to some requirements, though "The software is supposed to have high performance and after I've read the quoted text I started to panic!" one thing I can change is changing double variables to Double so as to prevent auto boxing and I don't know if that is worth it or not, in next sample algorithm public void multiply(final double val) { final int rows = getSize1(); final int cols = getSize2(); for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { this.get(i).set(j, this.get(i).get(j) * val); } } } My question is does changing double to Double makes a difference ? or that's a micro optimizing that won't affect anything ? keep in mind I might be using large matrices.2nd Should I consider redesigning the whole program again ?

    Read the article
