Search Results

  • Is Ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia:

        Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch.

    So, a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems and so far it's been working as advertised, but I am interested in what other sysadmins' experiences have been with Ksplice before going 'all in' and deploying this on our production servers.

    So, anybody using Ksplice in production?

    Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going...

    "If you are aware of Ksplice, is there a reason you are not using it?"
    "Do you feel it's still too bleeding edge, unproven or untested?"
    "Does Ksplice not fit well within your current patch-management system?"
    "Do you hate having systems that have long (and secure) uptimes?" ;-)
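
    For readers who haven't touched it, the workflow in the open-source Ksplice tools looks roughly like the sketch below. The patch name, source path, and generated update ID are illustrative, not copied from any real run; check ksplice-create --help on your version before trusting the flags:

        # Build a rebootless update from a unified diff plus the source tree
        # of the running kernel, then apply it to the live kernel.
        ksplice-create --patch=fix-cve.patch /usr/src/linux-2.6.32/
        ksplice-apply ksplice-abcd1234.tar.gz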

    Read the article

  • Is there a standard for storing normalized phone numbers in a database?

    - by Eric Z Beard
    What is a good data structure for storing phone numbers in database fields? I'm looking for something that is flexible enough to handle international numbers, and also something that allows the various parts of the number to be queried efficiently.

    [Edit] Just to clarify the use case here: I currently store numbers in a single varchar field, and I leave them just as the customer entered them. Then, when the number is needed by code, I normalize it. The problem is that if I want to query a few million rows to find matching phone numbers, it involves a function, like

        where dbo.f_normalizenum(num1) = dbo.f_normalizenum(num2)

    which is terribly inefficient. Queries that are looking for things like the area code also become extremely tricky when it's just a single varchar field.

    [Edit] People have made lots of good suggestions here, thanks! As an update, here is what I'm doing now: I still store numbers exactly as they were entered, in a varchar field, but instead of normalizing things at query time, I have a trigger that does all that work as records are inserted or updated. So I have ints or bigints for any parts that I need to query, and those fields are indexed to make queries run faster.
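
    A minimal T-SQL sketch of that trigger approach, reusing the poster's own dbo.f_normalizenum; the table and column names here are hypothetical:

        CREATE TABLE dbo.PhoneNumbers (
            Id            INT IDENTITY PRIMARY KEY,
            RawNumber     VARCHAR(40) NOT NULL,  -- exactly as the customer typed it
            NormalizedNum BIGINT NULL            -- populated by the trigger below
        );

        CREATE INDEX IX_PhoneNumbers_Normalized
            ON dbo.PhoneNumbers (NormalizedNum);
        GO

        -- Normalize once per write, so lookups never call the function per row.
        CREATE TRIGGER dbo.trg_PhoneNumbers_Normalize
        ON dbo.PhoneNumbers
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE p
            SET NormalizedNum = dbo.f_normalizenum(p.RawNumber)
            FROM dbo.PhoneNumbers p
            JOIN inserted i ON i.Id = p.Id;
        END

    A lookup then becomes an index seek (WHERE NormalizedNum = dbo.f_normalizenum(@input)), with the function evaluated once for the parameter instead of once per row. An area-code column could be filled the same way.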

    Read the article

  • Is JavaEE really portable?

    - by Bozho
    I'm just implementing a JavaEE assignment I was given on an interview. I have some prior experience with EJB, but nothing related to JMS and MDBs. So here's what I find through the numerous examples:

    - application servers bind their topics and queues to different JNDI names - for example topic/queue, or jms
    - the activationConfig property is required on JBoss, while in the Sun tutorial it is not
    - after starting my application, JBoss warns me that my topic isn't bound (it isn't, actually - I haven't bound it, but I expect it to be bound automatically; in fact, in an example for JBoss 4.0, automatic binding does seem to happen). A suggested solution is to map it in some JBoss files, or even use JBoss-specific annotations.

    This might be just JBoss, but since it is certified to implement the spec, it appears the spec doesn't specify these things. And there all the alleged portability vanishes.

    So I wonder - how come it is claimed that JavaEE is portable, and that you can take an ear, deploy it on another application server, and it magically runs, if such extremely basic things don't appear to be portable at all?

    P.S. Sorry for the rant, but I assume I might be doing/getting something wrong, so state your opinions.
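
    For concreteness, a minimal MDB of the kind under discussion might look like the sketch below (the destination name is hypothetical). The activationConfig block is the part the Sun tutorial leaves out but JBoss insists on:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.Message;
        import javax.jms.MessageListener;

        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Topic"),
            @ActivationConfigProperty(propertyName = "destination",
                                      propertyValue = "topic/exampleTopic")
        })
        public class ExampleListener implements MessageListener {
            public void onMessage(Message message) {
                // handle the message
            }
        }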

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100K's of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing - for example, all the chairs in a scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material.

    I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching.

    What I'm looking for is guidelines on how to think of the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - should I try to write ubershaders to minimize shader switching?
    - should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures - just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
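
    To make the sorting idea concrete, here is a minimal sketch (not a benchmark; the drawCalls records and their property names are hypothetical) of a render loop that only touches GL state when it actually changes:

        // Sort once (the scene is static): group by program, then by texture.
        drawCalls.sort(function (a, b) {
          return (a.programId - b.programId) || (a.textureId - b.textureId);
        });

        var currentProgram = null;
        var currentTexture = null;
        for (var i = 0; i < drawCalls.length; i++) {
          var d = drawCalls[i];
          if (d.program !== currentProgram) {   // gl.useProgram only on change
            gl.useProgram(d.program);
            currentProgram = d.program;
          }
          if (d.texture !== currentTexture) {   // same idea for texture binds
            gl.bindTexture(gl.TEXTURE_2D, d.texture);
            currentTexture = d.texture;
          }
          gl.uniformMatrix4fv(d.modelMatrixLoc, false, d.modelMatrix);
          gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, d.indexBuffer);
          gl.drawElements(gl.TRIANGLES, d.indexCount, gl.UNSIGNED_SHORT, 0);
        }

    With 1000 objects sharing 10 materials, that is 10 gl.useProgram() calls per frame instead of 1000, while the per-object uniform upload and draw call remain.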

    Read the article

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb);

        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);

        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj + offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions, with greater functionality (and accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand.

    Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funky-ness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);
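
    The "vtable funky-ness" guess is worth sketching out: if SomeClass has any virtual functions, its first pointer-sized bytes are a vtable pointer that is only meaningful in the crashed process's address space, so copying it in from the dump makes every virtual call jump to garbage. A hedged workaround, assuming a single vtable pointer at offset 0 (i.e. single inheritance, no virtual bases):

        SomeClass* thisClass = SomeClass::New();

        // Save the vtable pointer of the live, in-process object...
        void* liveVtable = *reinterpret_cast<void**>(thisClass);

        // ...overwrite the object with the raw bytes from the dump...
        ReadMemory(addressOfObj, thisClass, sizeof(*thisClass), &cb);

        // ...then restore the good vtable pointer before calling anything virtual.
        *reinterpret_cast<void**>(thisClass) = liveVtable;

    Any member pointers inside the copied object still point into the dump's address space, so they would need the same fix-up treatment before being followed.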

    Read the article

  • What does the Excel VBA range.Rows property really do?

    - by RBarryYoung
    OK, I am finishing up an add-on project for a legacy Excel-VBA application, and I have once again run up against the conundrum of the mysterious range.Rows(?) and worksheet.Rows properties. Does anyone know what these properties really do and what they are supposed to provide to me? (Note: all of this probably applies to the corresponding *.Columns properties also.)

    What I would really like to be able to use it for is to return a range of rows, like this:

        Set rng = wks.Rows(iStartRow, iEndRow)

    But I have never been able to get it to do that, even though the IntelliSense shows two arguments for it. Instead I have to use one of the two or three other (very kludgy) techniques. The help is very unhelpful (typically so for Office VBA), and googling for "Rows" is not very useful, no matter how many other terms I add to it. The only things that I have been able to use it for are 1) returning a single row as a range (rng.Rows(i)) and 2) returning a count of the rows in a range (rng.Rows.Count). Is that it? Is there really nothing else that it's good for?

    Clarification: I know that it returns a range and that there are other ways to get a range of rows. What I am asking is specifically what we get from .Rows() that we do not already get from .Cells() and .Range(). The two things that I know of are 1) an easier way to return a range of a single row and 2) a way to count the number of rows in a range. Is there anything else?
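
    For reference, a sketch of the kludgy-but-working ways to get a multi-row band: Rows accepts a single row-spec (a number or a string like "3:17") rather than two numeric bounds, so the band is usually built one of these two ways:

        Dim rng As Range

        ' String row spec, e.g. "3:17"
        Set rng = wks.Rows(iStartRow & ":" & iEndRow)

        ' Or span two single-row ranges with Range()
        Set rng = wks.Range(wks.Rows(iStartRow), wks.Rows(iEndRow))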

    Read the article

  • Can I stop the dbml designer from adding a connection string to the dbml file?

    - by drs9222
    We have a custom function AppSettings.GetConnectionString() which is always called to determine the connection string that should be used. How this function works is unimportant to the discussion; it suffices to say that it returns a connection string and I have to use it. I want my LINQ to SQL DataContext to use this, so I removed all connection string information from the dbml file and created a partial class with a default constructor like this:

        public partial class SampleDataContext
        {
            public SampleDataContext()
                : base(AppSettings.GetConnectionString())
            {
            }
        }

    This works fine until I use the designer to drag and drop a table into the diagram. The act of dragging a table into the diagram will do several unwanted things:

    - a Settings file will be created
    - an app.config file will be created
    - my dbml file will have the connection string embedded in it

    All of this is done before I even save the file! When I save the diagram, the designer file is recreated, and it will contain its own default constructor which uses the wrong connection string. Of course this means my DataContext now has two default constructors and I can't build anymore! I can undo all of these bad things, but it is annoying: I have to manually remove the connection string and the new files after each change. Is there any way I can stop the designer from making these changes without asking?

    EDIT: The requirement to use the AppSettings.GetConnectionString() method was imposed on me rather late in the game. I used to use something very similar to what the designer generates for me. There are quite a few places that call the default constructor. I am aware that I could change them all to create the data context in another way (using a different constructor, a static method, a factory, etc.). That kind of change would only be slightly annoying, since it would only have to be done once. However, I feel that it is sidestepping the real issue: the dbml file and configuration files would still contain an incorrect, if unused, connection string, which at best could confuse other developers.

    Read the article

  • Single developer, project organization

    - by poke
    I am looking for a good (and free) way to organize some of my personal projects. I say "organize" because I'm not sure the standard project management software solutions are exactly what I am looking for, and especially not what I, as a single developer, need.

    In general, I just want to keep the progress of my projects organized in some way. I would like to be able to keep track of milestones and split those into multiple smaller tasks, so I can keep track of my progress. So some task/issue based system would probably be good, especially as I also want to keep track of issues/bugs with specific versions (although I alone will create those issues).

    I am and will be the only developer on those projects, so it doesn't matter if the software is offline or online, and I also don't need any collaboration features (like commenting on things, or assigning tasks to other developers, etc.). But if there is a good piece of software that fits my needs and in addition has those things, I don't really care; after all, it's easy enough not to use available features. Many online solutions also offer integrated code hosting. I am using git internally, but I don't plan to push any of the code, so such a feature is not needed either. In the case of online solutions, however, I would like the projects to be closed to the public (some of the online utilities only offer open source projects for free and require payments for private projects).

    I have looked at some project management solutions already, and I also read some similar questions here on SO. But given that I'm a single developer, my focus is probably a bit different from when others ask for a huge distributed piece of software that supports many developers and different collaboration features. Some standard answers such as Trac (which also only supports one project), Redmine and FogBugz look interesting, but are a bit off my interest (although you may change my mind on that :P). Currently, I'm looking at Indefero, which doesn't look too bad. But what do you think?

    Read the article

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi

    The confusing thing is: which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly-read-only clients that are accessing the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.)

    My understanding - which I'd like corrected/advised - is that:

    - Postgres is a great DB, but all the Python bindings - and there is a confusing maze of them - are abandonware
    - there is psycopg2, but it makes a lot of noise about doing its own connection pooling and things; does this coexist gracefully/usefully/transparently with Twisted's async database connection pooling and such?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage
    - MySQL - after the Oracle takeover, who'd want to adopt it now or adopt a fork?

    Is there anything else out there?
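
    For what it's worth, the psycopg2-under-Twisted combination can be sketched like this (connection parameters and table are hypothetical): adbapi keeps its own thread pool and hands plain DB-API connections to it, so psycopg2's pooling extras don't need to be involved at all.

        from twisted.enterprise import adbapi

        # adbapi does the pooling; psycopg2 is just the DB-API module it drives.
        dbpool = adbapi.ConnectionPool(
            "psycopg2",
            host="localhost", database="appdb",
            user="app", password="secret",
        )

        def record_event(value):
            # Runs on a pool thread; returns a Deferred that fires on the reactor.
            return dbpool.runOperation(
                "INSERT INTO events (value) VALUES (%s)", (value,))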

    Read the article

  • Using different numeric variable types

    - by DataPimp
    I'm still pretty new, so bear with me on this one; my question(s) are not meant to be argumentative or petty, but during some reading something struck me as odd.

    I'm under the assumption that when computers were slow and memory was expensive, using the correct variable type was much more of a necessity than it is today. Now that memory is a bit easier to come by, people seem to have relaxed a bit. For example, you see this sample code everywhere:

        for (int i = 0; i < length; i++)

    int (-2,147,483,648 to 2,147,483,647) for length? Isn't byte (0-255) a better choice?

    So I'm curious of your opinion and what you believe to be best practice. I hate to think this would be used only because the acronym "int" is more intuitive for a beginner... or has memory just become so cheap that we really don't need to concern ourselves with such petty things, and therefore we should just use long so we can be sure any other numbers/types (within reason) used can be cast automagically? ...or am I just being silly by concerning myself with such things?
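
    One concrete reason int wins, sketched in C# (the length value is hypothetical): integer arithmetic is performed at int width anyway, so a byte counter saves nothing and silently wraps past 255.

        using System;

        class ByteCounterDemo
        {
            static void Main()
            {
                // Hypothetical collection longer than 255 items:
                int length = 300;

                // for (byte i = 0; i < length; i++) { }
                // never terminates: i wraps 255 -> 0 before reaching 300.

                byte a = 200, b = 100;
                byte sum = (byte)(a + b); // a + b is an int; the cast wraps 300 to 44
                Console.WriteLine(sum);   // prints 44
            }
        }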

    Read the article

  • Handling learning curve for new developers

    - by pete the pagan-gerbil
    Our company likes to hire new developers with no experience. We have a core set of skills that we try to get them up to speed with, like ASP.NET and WinForms - to teach basic programming, the .NET languages, and the things they'll need to maintain and write. We also try to mentor them through early projects, so they can learn from someone more experienced.

    Recently, we've been seeing the benefits of new frameworks like MVC and ideas like unit testing and TDD (by extension, dependency injection and IoC), and we'd like to start using these in the team. However, this increases the time that a junior would need before they can get started on a new project - because doing something like unit tests wrong could cause major headaches months or years later in maintenance, especially if we believe the unit tests to be comprehensive.

    How do you handle the huge amount of things that a junior will need to take on, acknowledging that the business wants them working independently as soon as possible? Is it acceptable to tell them not to unit test until a while after they are independent (and give them small, simpler projects in the meantime) before taking them to 'level 2' of the core skills?

    Read the article

  • Blackberry application works in simulator but not device

    - by Kai
    I read some of the similar posts on this site that deal with what seems to be the same issue, and the responses didn't really seem to clarify things for me.

    My application works fine in the simulator. I believe I'm on a Bold 9000 with OS 4.6. The app is signed. My app makes an HTTP call via 3G to fetch an XML result; the type is application/xhtml+xml. On the device, it gives no error and makes no visual sign of error. I tell the try/catch to print the results to the screen and I get nothing. The HttpConnection code was taken right out of the demos and works fine in the sim.

    Since it gives no error, I began to reflect back on things I recall reading back when the project began. deviceside=true? Something like that? My request is simply

        HttpConnection connection = (HttpConnection) Connector.open(url);

    where url is just a standard URL, no GET vars. Based on the amount of time I see the connection arrows in the corner of the screen, I assume the app is launching the initial communication to my server, then either getting a bad result, or it gets results and the persistent store is not functioning as expected.

    I have no idea where to begin with this. Posting code would be ridiculous, since it would be basically my whole app. I guess my question is: does anyone know of any major differences between device and simulator that could cause something like the HTTP connection or persistent store to fail? A build setting? An OS restriction? Any standard procedure I may have just not known about that everyone should do before beginning device testing? Thanks
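
    On the half-remembered deviceside=true point: BlackBerry picks a network transport from suffixes appended to the connector string, and the simulator is far more forgiving about a bare URL than real hardware on a carrier network. A hedged sketch of the usual experiment:

        // Direct TCP over the carrier network (may need APN settings on some carriers)
        HttpConnection connection =
            (HttpConnection) Connector.open(url + ";deviceside=true");

        // Or route through BES/MDS instead, if the device is on one:
        // Connector.open(url + ";deviceside=false");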

    Read the article

  • Switching from form_for to remote_form_for problems with submit changes in Rails

    - by Matthias Günther
    Hi there, another day with Rails, and today I want to use Ajax. link_to_remote for changing a text was pretty easy, so I thought it would also be easy to switch my form_for loop to an Ajax request form with remote_form_for, but the problem with remote_form_for is that it doesn't save my changes.

    Here is the code that worked:

        <% form_for bill, :url => {:action => 'update', :id => bill.id} do |f| %>
          # make the processing, e.g. displaying textfields and so on
          <%= submit_tag 'speichern' %>

    It produces the following HTML code:

        <form action="/adminbill/update/58" class="edit_bill" id="edit_bill_58" method="post"><div style="margin:0;padding:0;display:inline"><input name="_method" type="hidden" value="put" /></div>
        <!-- here the html things for the forms -->
        <input class="button" name="commit" type="submit" value="speichern" />

    Here is the code which doesn't save and submit the changes:

        <% remote_form_for bill, :url => {:action => 'update', :id => bill.id} do |f| %>
          # make the processing, e.g. displaying textfields and so on
          <%= submit_tag 'speichern' %>

    It produces the following HTML code:

        <form action="/adminbill/update/58" class="edit_bill" id="edit_bill_58" method="post" onsubmit="$.ajax({data:$.param($(this).serializeArray()), dataType:'script', type:'post', url:'/adminbill/update/58'}); return false;"><div style="margin:0;padding:0;display:inline"><input name="_method" type="hidden" value="put" /></div>
        <!-- here the html things for the forms -->
        <input class="button" name="commit" type="submit" value="speichern" />

    I don't know if I have to consider something special when using remote_form_for (see remote_form_for).

    Read the article

  • Single Responsibility Principle vs Anemic Domain Model anti-pattern

    - by Niall Connaughton
    I'm in a project that takes the Single Responsibility Principle pretty seriously. We have a lot of small classes and things are quite simple. However, we have an anemic domain model - there is no behaviour in any of our model classes; they are just property bags. This isn't a complaint about our design - it actually seems to work quite well.

    During design reviews, SRP is brought out whenever new behaviour is added to the system, and so new behaviour typically ends up in a new class. This keeps things very easily unit-testable, but I am perplexed sometimes, because it feels like pulling behaviour out of the place where it's relevant.

    I'm trying to improve my understanding of how to apply SRP properly. It seems to me that SRP is in opposition to adding business-modelling behaviour that shares the same context to one object, because the object inevitably ends up either doing more than one related thing, or doing one thing but knowing multiple business rules that change the shape of its outputs. If that is so, then it feels like the end result is an Anemic Domain Model, which is certainly the case in our project. Yet the Anemic Domain Model is an anti-pattern. Can these two ideas coexist?

    EDIT: A couple of context-related links:

    - SRP: http://www.objectmentor.com/resources/articles/srp.pdf
    - Anemic Domain Model: http://martinfowler.com/bliki/AnemicDomainModel.html

    I'm not the kind of developer who just likes to find a prophet and follow what they say as gospel. So I don't provide links to these as a way of stating "these are the rules", just as a source of definitions of the two concepts.

    Read the article

  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stack Overflow, yeah, it's spring and a lot of things are happening to me... I'm also changing some software things on my computer, because suddenly everything seemed boring after starting my laptop. I even changed my wallpaper!!!

    Besides that, I'm looking for a new database frontend, and after trying Google with several queries I didn't find the right software. You have to know, my laptop and me are very very special :) I'm looking for a database frontend which should have the following features:

    - can access PostgreSQL and MySQL databases
    - can handle schemata
    - offers a nice SQL query tool
    - supports import and export functionality (something like tab-separated text files)
    - is free
    - looks awesome - every time a colleague comes to my office he must get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality which can simplify a programmer's life. However, now you have to buy a license... I'm a scientist, and money for software is limited =)

    So, it's my first time (question) here at Stack Overflow; please be cheerful :)

    Read the article

  • Does Antivirus2009 or Antivirus360 automatically install on your computer and if so how?

    - by sergey
    I run Firefox on Vista, and unfortunately I got tricked (through a deceptive Google result) into going to a page containing one of those fake "Your Computer Has all of this Spyware on it!" pages. I tried manually closing the tab, but it had one of those "Are you sure you want to navigate away" JavaScript alerts (HATE THOSE). So I clicked "OK," and the tab closed. Then I closed Firefox altogether and rebooted.

    Now, before I could close the tab, it did prompt me to download a file, but of course I chose not to, and checking my downloads folder, nothing new is there. Also, even if I *did* download it, *I* would still have to choose to run it by double-clicking on it for it to install itself, right?

    Also, I ran Malwarebytes and Windows Defender, and both said everything was fine. From this I would normally believe I am safe, but I have read everywhere that this thing "automatically installs" itself and that it is a bitch to get rid of. Is it really possible for this thing to dig in if you are running Firefox and didn't choose to download it or run it after downloading?

    Read the article

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for the items overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

        table "things"          table "xvalue"
        name   stuff            name   xval
        ----   -----            ----   ----
        foo    ...              bar    21
        bar    ...
        baz    ...

    If I don't care about a FK from xvalue.name to things.name, I could simply put a "DEFAULT" name:

        table "xvalue"
        name      xval
        ----      ----
        DEFAULT   17
        bar       21

    But I like having a FK. I could have a separate default table, but it seems odd to have 2x the number of tables:

        table "xvalue_default"      table "xvalue"
        xval                        name   xval
        ----                        ----   ----
        17                          bar    21

    I could have a "defaults table":

        tablename   attributename   defaultvalue
        ---------   -------------   ------------
        xvalue      xval            17

    but then I run into type issues on defaultvalue.

    My operations guys prefer as compact a representation as possible, so they can most easily see the "diff" or deviations from the default. What's the best way to represent this, including the default value? This will be for Oracle 10.2, if that makes a difference.
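
    A hedged sketch of how the two-table variant reads at query time on Oracle (names follow the post; the scalar subquery assumes xvalue_default holds exactly one row):

        SELECT t.name,
               COALESCE(x.xval, (SELECT d.xval FROM xvalue_default d)) AS xval
        FROM   things t
        LEFT JOIN xvalue x ON x.name = t.name;

    This keeps the FK from xvalue.name to things.name, keeps xvalue sparse (the "diff" view the operations guys want is just SELECT * FROM xvalue), and centralizes the default in one place.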

    Read the article

  • Are MEF's ComposableParts contracts instance-based?

    - by Dave
    I didn't really know how to phrase the title of my question, so my apologies in advance. I read through parts of the MEF documentation to try to find the answer to my question, but couldn't find it.

    I'm using ImportMany to allow MEF to create multiple instances of a specific plugin. That plugin Imports several parts, and within calls to a specific instance, it wants these Imports to be singletons. However, what I don't want is for all instances of this plugin to use the same singleton.

    For example, let's say my application ImportManys Blender appliances. Every time I ask for one, I want a different Blender. However, each Blender Imports a ControlPanel. I want each Blender to have its own ControlPanel. To make things a little more interesting, each Blender can load BlendPrograms, which are also contained within their own assemblies, and MEF takes care of this loading. A BlendProgram might need to access the ControlPanel to get the speed, but I want to ensure that it is accessing the correct ControlPanel (i.e. the one that is associated with the Blender that is associated with the program!).

    This diagram might clear things up a little bit (diagram not reproduced in this digest). As the note on it says, I believe that the confusion could come from an inherently poor design. The BlendProgram shouldn't touch the ControlPanel directly; instead, perhaps the BlendProgram should get the speed via the Blender, which will then delegate the request to its ControlPanel. If this is the case, then I assume the BlendProgram needs to have a reference to a specific Blender. In order to do this, is the right way to leverage MEF to use an ImportingConstructor for BlendProgram, i.e.

        public class BlendProgram : IBlendProgram
        {
            [ImportingConstructor]
            public BlendProgram(Blender blender) { }
        }

    And if this is the case, how do I know that MEF will use the intended Blender plugin?
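
    On the one-ControlPanel-per-Blender half of the question, this is roughly what MEF's creation policies are for. A hedged sketch (the interface and class names follow the Blender example; the attributes are from System.ComponentModel.Composition):

        [Export(typeof(IControlPanel))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class ControlPanel : IControlPanel
        {
        }

        [Export(typeof(IBlender))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class Blender : IBlender
        {
            // Each non-shared Blender gets its own non-shared ControlPanel,
            // rather than the container-wide shared default.
            [Import(RequiredCreationPolicy = CreationPolicy.NonShared)]
            public IControlPanel ControlPanel { get; set; }
        }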

    Read the article

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused, because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant.

    I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that sometimes Firebug reports that this takes 300-600 milliseconds, and sometimes 3.68 seconds. I can run the request over and over again, with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason.

    So, my question is: what else can cause such variability in the response time? How can I determine what's doing it? More importantly, is there any way to get things more consistent?
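
    A cheap first step toward "what's doing it" is to split server time out of the total Firebug reports. A minimal PHP sketch (fetch_grid_data() is a hypothetical stand-in for the real data-access call):

        <?php
        $t0 = microtime(true);

        $rows = fetch_grid_data();   // hypothetical: the real query/formatting work

        $elapsed_ms = (microtime(true) - $t0) * 1000;
        header('X-Server-Time-Ms: ' . round($elapsed_ms, 1));

        echo json_encode($rows);

    If the X-Server-Time-Ms header stays flat while Firebug's total swings, the variability is on the browser side (jqGrid parsing and DOM work); if it swings too, the PHP/database side is the place to dig.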

    Read the article

  • Looking for Remote Control that works with everything (even Windows 7 Media Center)

    - by T Reddy
    Using my Google-Fu, it seems that the most basic thing one gets with any DVR is the remote control. Had I known it would be difficult just to get a consumer IR receiver for Windows 7, I may not have bothered to build an HTPC. But it's too late; I already have the HTPC ready to go (minus the CETON card...). So I'm moving away from TiVo - I hate paying the monthly fees, and my box is ancient.

    I'm looking for these solutions to my HTPC setup... I want to:

    - switch audio from HDMI to SPDIF via the remote control (i.e., switch from TV to Receiver); as a side note, the built-in audio on the mobo has software to do this
    - have the volume button on the remote always change the TV's volume (or the Receiver's, if possible) and NOT the PC's volume
    - have the remote/receiver work well at around 25 feet
    - bonus if the IR receiver can work with my existing TiVo remote (or other remotes lying around the house)

    I read a review of the Bluetooth TiVo remote... it sounds promising... but I'm not sure if it is great for a Windows 7 HTPC?

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.

    Deterministic (things that aren't going to change during gameplay):

    - Attributes (Strength, Agility, etc.) and their descriptions
    - Skills and their descriptions and requirements
    - Races, Factions, Equipment, etc.
    - Base Attribute/Skill/Equipment loadouts for monsters

    Nondeterministic (things that will change a lot during gameplay):

    - Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.
    - Player inventory, cash, experience, level
    - Player quest states
    - Player FactionRelationships
    - ...and so on

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database.

    So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

        Det model/db              Nondet model/db
        ______________            __________________________
        |Attributes  |            |PlayerAttributeModifiers|
        |------------|            |------------------------|
        |Id          |            |Id                      |
        |Name        |            |AttributeId             |
        |Description |            |SourceId                |
        --------------            |Value                   |
                                  --------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with its own database?

    With distinct models/dbs it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

    Read the article

  • RoR: Replace_html with partial and collection not functioning

    - by Jack
    I am trying to create a tabbed interface using the Prototype helper method replace_html. I have three different partials I am working with. The first one is the 'main tab', and it is loaded automatically, like so:

        <div id="grid">
          <% things_today = things.find_things_today %>
          <%= render :partial => "/todaything", :collection => things_today, :as => :thing %>
        </div>

    ...which works fine. Similarly, I have a _tomorrowthing partial which would replace the content in the 'grid' div, like so:

        <% things_tomorrow = things.find_things_tomorrow %>
        <%= link_to_function('Tomorrow', nil, :id => 'tab') do |page|
              page.replace_html 'grid', :partial => '/tomorrowthing', :collection => things_tomorrow, :as => :thing
            end %>

    If I click on this tab, nothing happens at all. Using Firebug, the only error I find is a "missing ) after argument list", which is contained in the Element.update block where the link_to_function is called. What am I doing wrong?

    Read the article

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes in PHP, [X]HTML, CSS, JavaScript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and use Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music.

    I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow.

    I sketch a lot - for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet.

    My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but I am spoiled by the two 19" monitors on my home desktop and at my job's workstation.

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP.

    We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network - doing large inserts, searches, updates, etc. - along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records.

    What things should I use in my response? What should I do with my demo code to illustrate this?

    My short list so far:

    - security
    - concurrent access
    - performance with large amounts of data
    - amount of time to do such a massive rewrite/switch
    - lack of transactions
    - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
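
    A hedged skeleton for that console app (the share path, connection string, and Records table are all hypothetical; the point is simply timing the same write workload against both stores):

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;

        class FlatFileVsSql
        {
            static void Main()
            {
                const int n = 100000;
                Stopwatch sw = Stopwatch.StartNew();

                // Flat file on a network share: appends only, and concurrent
                // writers would still need their own locking scheme (not shown).
                using (StreamWriter w = new StreamWriter(@"\\server\share\records.txt", true))
                    for (int i = 0; i < n; i++)
                        w.WriteLine(i + ",sample record " + i);
                Console.WriteLine("flat file: " + sw.ElapsedMilliseconds + " ms");

                sw.Reset(); sw.Start();
                using (SqlConnection conn = new SqlConnection(
                    "Server=dbserver;Database=Demo;Integrated Security=true"))
                {
                    conn.Open();
                    SqlTransaction tx = conn.BeginTransaction();
                    SqlCommand cmd = new SqlCommand(
                        "INSERT INTO Records (Id, Payload) VALUES (@id, @p)", conn, tx);
                    cmd.Parameters.Add("@id", System.Data.SqlDbType.Int);
                    cmd.Parameters.Add("@p", System.Data.SqlDbType.VarChar, 100);
                    for (int i = 0; i < n; i++)
                    {
                        cmd.Parameters["@id"].Value = i;
                        cmd.Parameters["@p"].Value = "sample record " + i;
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
                Console.WriteLine("sql inserts: " + sw.ElapsedMilliseconds + " ms");
            }
        }

    The searches are where the gap gets embarrassing: add a timed random-key lookup to each side (a scan of the text file versus an indexed SELECT), and then kill the network connection mid-run to compare what each store does to in-flight data.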

    Read the article

  • Risking the exception anti-pattern.. with some modifications

    - by Sridhar Iyer
    Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in place for events like this. One approach would be to write wrapper functions that encapsulate each API:

        returnCode = DEFAULT;
        try {
            returnCode = libraryAPI1();
        }
        catch(...) {
            returnCode = BAD;
        }
        return returnCode;

    The caller of the library then restarts the whole thread and reinitializes the module if the return code is bad. Things CAN go horribly wrong, though. For example, if the try block (or libraryAPI1()) had:

        func1();
        char *x = malloc(1000);
        func2();

    and func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome.

    Could you please tell me what other things can possibly go wrong in this scenario?
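
    For the specific leak above, the standard C++ answer is RAII: tie the allocation to a scope-bound object, and stack unwinding releases it even when func2() throws. A minimal sketch:

        #include <vector>

        void func1();
        void func2();   // may throw

        void wrapped()
        {
            func1();
            std::vector<char> buffer(1000);   // replaces the raw malloc(1000)
            func2();                          // if this throws, buffer is still freed
        }

    The catch(...) wrapper can then swallow the exception without leaking; the harder hazards (half-written files, mutexes held at the throw point) need the same scope-bound treatment with their own guard objects.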

    Read the article
