Search Results

Search found 36443 results on 1458 pages for 'table design'.

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost on any query against it. For comparison, I have another table, which grows at about the same rate, but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even past 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements.

    Some specifics: both tables contain a DateTime field called SampleTime, and a statement like the following (it lives in a stored procedure, but I show you the actual statement)

        SELECT MAX(SampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I set my database on. At the moment it sits on a separate spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
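
    A hedged aside, not part of the original post: since the slow query only touches PlacementID and SampleTime, a narrow nonclustered index on those two columns should let it seek without reading the rows that carry the large XML payload. A sketch, using the names from the post:

        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);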

    Read the article

  • SQL: Order randomly when inserting objects to a table

    - by Ekaterina
    I have a UDF that selects the top 6 objects from a table (with a union; code below) and inserts them into another table (btw, SQL Server 2005). What the code does is:

    - select objects for a specific city and add a level to those (from table Europe)
    - union that selection with a selection from the same table for objects from the same country, and add a level to those

    From the union, the top 6 objects are selected, ordered by level, so the objects from the same city come first, and if there aren't enough available, objects from the same country fill out the selection.

    And my problem is that I want the selection from table Europe to be random, but because I insert the result of my selection into a table inside a function, I can't use ORDER BY NEWID() or the RAND() function, since they are side-effecting/time-dependent operators; I get the following errors:

        Invalid use of side-effecting or time-dependent operator in 'newid' within a function.
        Invalid use of side-effecting or time-dependent operator in 'rand' within a function.

    UDF:

        ALTER FUNCTION [dbo].[Objects] (@id uniqueidentifier)
        RETURNS @objects TABLE
        (
            ObjectId uniqueidentifier NOT NULL,
            InternalId uniqueidentifier NOT NULL
        )
        AS
        BEGIN
            declare @city varchar(50)
            declare @country int

            select @city = city, @country = country
            from Europe
            where internalId = @id

            insert @objects
            select @id, internalId from
            (
                select distinct top 6 [level], internalId from
                (
                    select top 6 1 as [level], internalId
                    from Europe N4
                    where N4.city = @city and N4.internalId != @id
                    union
                    select top 6 2 as [level], internalId
                    from Europe N5
                    where N5.countryId = @country and N5.internalId != @id
                ) as selection_1
                order by [level]
            ) as selection_2

            return
        END

    If you have fresh ideas, please share them with me. (Just please don't suggest ordering by NEWID() or adding a RAND() column seeded from DateTime, because that won't work inside a function.)
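
    A hedged suggestion, not from the original question: a commonly cited workaround is to move NEWID() out of the function and behind a view, which a UDF is allowed to read. A sketch:

        CREATE VIEW dbo.RandomIdView
        AS
            SELECT NEWID() AS random_id
        GO

        CREATE FUNCTION dbo.GetRandomId()
        RETURNS uniqueidentifier
        AS
        BEGIN
            RETURN (SELECT random_id FROM dbo.RandomIdView)
        END
        GO

        -- inside [dbo].[Objects], the inner selections could then use, e.g.:
        --     order by [level], dbo.GetRandomId()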

    Read the article

  • MySQL forgot about automatically creating an index for a foreign key?

    - by bobo
    After running the following SQL statements, you will see that MySQL has automatically created the non-unique index question_tag_tag_id_tag_id on the tag_id column for me after the first ALTER TABLE statement has run. But after the second ALTER TABLE statement has run, I think MySQL should also have automatically created another non-unique index question_tag_question_id_question_id on the question_id column. But as you can see from the SHOW INDEXES output, it's not there. Why does MySQL forget about the second ALTER TABLE statement?

    By the way, I have already created a unique index question_id_tag_id_idx covering both the question_id and tag_id columns. Is creating a separate index for each of them redundant?

        mysql> DROP DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> CREATE DATABASE mydatabase;
        Query OK, 1 row affected (0.00 sec)

        mysql> USE mydatabase;
        Database changed

        mysql> CREATE TABLE question (id BIGINT AUTO_INCREMENT, html TEXT, PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE tag (id BIGINT AUTO_INCREMENT, name VARCHAR(10) NOT NULL, UNIQUE INDEX name_idx (name), PRIMARY KEY(id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.05 sec)

        mysql> CREATE TABLE question_tag (question_id BIGINT, tag_id BIGINT, UNIQUE INDEX question_id_tag_id_idx (question_id, tag_id), PRIMARY KEY(question_id, tag_id)) ENGINE = INNODB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_tag_id_tag_id FOREIGN KEY (tag_id) REFERENCES tag(id);
        Query OK, 0 rows affected (0.10 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_question_id_question_id FOREIGN KEY (question_id) REFERENCES question(id);
        Query OK, 0 rows affected (0.13 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        mysql> SHOW INDEXES FROM question_tag;
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table        | Non_unique | Key_name                   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | question_tag |          0 | PRIMARY                    |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | PRIMARY                    |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            1 | question_id | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          0 | question_id_tag_id_idx     |            2 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        | question_tag |          1 | question_tag_tag_id_tag_id |            1 | tag_id      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
        +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        5 rows in set (0.01 sec)
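
    A hedged observation, not part of the original post: InnoDB only creates an index for a foreign key when no existing index already has the FK column as its leftmost prefix. question_id is the leftmost column of both the PRIMARY KEY and question_id_tag_id_idx, so the second ALTER TABLE has nothing to add, whereas tag_id had no such prefix, which is why only it got an automatic index. This also suggests an extra single-column index on question_id would be redundant. One way to see which index each FK actually uses:

        SHOW CREATE TABLE question_tag\G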

    Read the article

  • NHibernate / Fluent - Mapping multiple objects to single lookup table

    - by Al
    Hi all. I am struggling a little in getting my mapping right. What I have is a single self-joined table of lookup values of certain types. Each lookup can have a parent, which can be of a different type. For simplicity's sake, let's take the country and state example, so the lookup table would look like this:

        Lookups
        -------
        Id
        Key
        Value
        LookupType
        ParentId   -- self-joining to Id

    Base class:

        public class Lookup : BaseEntity
        {
            public Lookup() {}

            public Lookup(string key, string value)
            {
                Key = key;
                Value = value;
            }

            public virtual Lookup Parent { get; set; }

            [DomainSignature]
            [NotNullNotEmpty]
            public virtual LookupType LookupType { get; set; }

            [NotNullNotEmpty]
            public virtual string Key { get; set; }

            [NotNullNotEmpty]
            public virtual string Value { get; set; }
        }

    The lookup map:

        public class LookupMap : IAutoMappingOverride<DBLookup>
        {
            public void Override(AutoMapping<Lookup> map)
            {
                map.Table("Lookups");
                map.References(x => x.Parent, "ParentId").ForeignKey("Id");
                map.DiscriminateSubClassesOnColumn<string>("LookupType").CustomType(typeof(LookupType));
            }
        }

    Base subclass map for subclasses:

        public class BaseLookupMap<T> : SubclassMap<T> where T : DBLookup
        {
            protected BaseLookupMap() { }

            protected BaseLookupMap(LookupType lookupType)
            {
                DiscriminatorValue(lookupType);
                Table("Lookups");
            }
        }

    Example subclass map:

        public class StateMap : BaseLookupMap<State>
        {
            protected StateMap() : base(LookupType.State) { }
        }

    Now I've almost got my mappings set; however, the mapping is still expecting a table-per-class setup, so it expects a 'State' table to exist with a reference to the state's Id in the Lookup table. I hope this makes sense. This doesn't seem like an uncommon approach when wanting to keep lookup-type values configurable. Thanks in advance. Al

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure on this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table.)

    This table will have the following fields:

        rating_id     (always   - unique)
        item_id       (always   - not unique)
        user_id       (optional - not unique)
        ip_address    (always   - not unique)
        rating_value  (always   - not unique)
        has_review    (bool)

    Now I envision 90% of the queries going something like this:

    - When a user rates something: select where item_id = x and ip_address = y; if rows = 0, insert the rating.
    - In user account pages: select where ip_address = x or username = y.

    Now, none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
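
    A hedged sketch, not part of the original question (the table name is assumed): non-unique columns index fine, and a composite index matching the first query plus single-column indexes for the OR query would cover both patterns.

        ALTER TABLE rating
            ADD INDEX idx_item_ip (item_id, ip_address),
            ADD INDEX idx_ip (ip_address),
            ADD INDEX idx_user (user_id);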

    Read the article

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field in order to reconstruct a usable relationship, such that I can properly join the two tables below and query them using LINQ and its Join method. The following sample shows the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]

        Id   Name     CsvArticleIds
        --   ------   -------------
        1    Joe      "15,22"
        5    Ed       "22"
        10   Arnie    "8,15,22"

    (Of course a link table should have been created; nonetheless, the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]

        Id   Title
        --   -----------------------------------------
        8    Beginning C#
        15   A Historic look at Programming in the 90s
        22   Gardening in January

    Additional info:

    - The fix can be at any level: C#/.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
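
    A hedged sketch, not part of the original question (the people/articles collection names are assumed): since this is a one-time migration, the CSV field can be exploded in memory on the C# side and joined there; String.Split cannot be translated to SQL, hence the AsEnumerable().

        var personArticles =
            from p in people.AsEnumerable()
            from idText in p.CsvArticleIds.Split(',')
            join a in articles on int.Parse(idText.Trim()) equals a.Id
            select new { p.Name, a.Title };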

    Read the article

  • Output columns not in destination table?

    - by lance
    SUMMARY: I need to use an OUTPUT clause on an INSERT statement to return columns which don't exist on the table into which I'm inserting. If I can avoid it, I don't want to add columns to the table into which I'm inserting.

    DETAILS: My FinishedDocument table has only one column. This is the table into which I'm inserting.

        FinishedDocument
        -- DocumentID

    My Document table has two columns. This is the table from which I need to return data.

        Document
        -- DocumentID
        -- Description

    The following inserts one row into FinishedDocument. Its OUTPUT clause returns the DocumentID which was inserted. This works, but it doesn't give me the Description of the inserted document.

        INSERT INTO FinishedDocument
        OUTPUT INSERTED.DocumentID
        SELECT DocumentID
        FROM Document
        WHERE DocumentID = @DocumentID

    I need to return from the Document table both the DocumentID and the Description of the matching document from the INSERT. What syntax do I need to pull this off? I'm thinking it's possible only with the one INSERT statement, by tweaking the OUTPUT clause (in a way I clearly don't understand)? Is there a smarter way that doesn't resemble the path I'm going down here?

    EDIT: SQL Server 2005
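
    A hedged sketch, not from the original question: on SQL Server 2005 the OUTPUT clause of an INSERT can only reference columns of the inserted row, so one workaround is to capture the ids into a table variable and join back to Document for the Description:

        DECLARE @inserted TABLE (DocumentID int);

        INSERT INTO FinishedDocument
        OUTPUT INSERTED.DocumentID INTO @inserted
        SELECT DocumentID FROM Document WHERE DocumentID = @DocumentID;

        SELECT d.DocumentID, d.Description
        FROM @inserted i
        JOIN Document d ON d.DocumentID = i.DocumentID;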

    Read the article

  • UITableView animations for a "lazy" UI design?

    - by donuts
    I have a UITableViewController that allows the user to perform editing tasks. Now, once a user has committed a change, the table view doesn't directly change the model and update the table; rather, it "informs" the model what the user wants to do. The model in turn updates accordingly and then posts a notification that it has been changed. As far as I've seen, I need to call beginUpdates/endUpdates on the table view and, in between, change the model to its final form. My changes, though, are asynchronous, and I cannot guarantee the model is updated before endUpdates is called. Currently, each time I receive a "model did change" notification, I reload the entire table. So, how can I make cell animations (delete/insert) work? Should the model fire a notification for each little change instead of one for the entire table?

    Read the article

  • Handling missing data

    - by soppotare
    Say I have a simple helpdesk application which logs calls made by users. I would typically have fields in a table relating to the call, e.g. CallID, Description, CustomerID, etc. I would also have a table of customers including CustomerID, Username, Password, FullName, etc. Now when a user is deleted from the customers table, the inner join between the calls table and the customers table to find out historically which user logged a call would produce no results. How do people usually deal with this?

    - Have separate customer and user-account tables
    - Just disable the accounts, so the data is still available
    - Record the customer's name in the calls table as a separate field
    - or any other methods/suggestions?
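
    A hedged sketch of the second option, with assumed column names: flag the row instead of deleting it, and use a LEFT JOIN so call history survives even if rows do get removed later.

        ALTER TABLE customers ADD is_disabled bit NOT NULL DEFAULT 0;

        SELECT c.CallID, c.Description, cu.FullName
        FROM calls c
        LEFT JOIN customers cu ON cu.CustomerID = c.CustomerID;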

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), and so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?).

    A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY.

    What do you think is the correct way of solving this problem? Is there a better way than having two tables?
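
    A hedged alternative, not part of the original question (the table name is assumed): with a composite index on the two columns, the DISTINCT query can typically walk the index rather than the 250k rows, which may make the second table unnecessary.

        CREATE INDEX idx_year_month ON records (year, month);

        SELECT DISTINCT year, month
        FROM records
        ORDER BY year, month;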

    Read the article

  • Table Design for (Currently Viewing Videos)?

    - by Surya sasidhar
    Hi, I am working on a video portal project, and I am trying to build a "currently viewing" feature: showing the people who are currently viewing a given video. For this I have designed a table with the following columns:

        Sno, VideoId, SessionId, UserId, CreatedDate

    but I think these columns are not sufficient. If possible, can you help me design this table? How can we accurately represent who is currently viewing a video? Please help me. Thank you.
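
    A hedged sketch, not from the original question (SQL Server flavored, names assumed): "currently viewing" usually has to be inferred, since there is no reliable event when someone stops watching. One common approach is a last-heartbeat timestamp that the player page refreshes periodically; viewers whose heartbeat is older than some cutoff are treated as gone.

        CREATE TABLE CurrentlyViewing (
            VideoId     INT          NOT NULL,
            SessionId   VARCHAR(64)  NOT NULL,
            UserId      INT          NULL,       -- NULL for anonymous viewers
            LastSeenUtc DATETIME     NOT NULL,
            PRIMARY KEY (VideoId, SessionId)
        );

        -- viewers seen in the last two minutes count as "currently viewing"
        SELECT COUNT(*) FROM CurrentlyViewing
        WHERE VideoId = @VideoId
          AND LastSeenUtc > DATEADD(MINUTE, -2, GETUTCDATE());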

    Read the article

  • Best way for an external (remote) graphics designer to style ASP.NET MVC 4 app?

    - by Tom K
    My customer has his own graphics designer whom he wants to use to style the web application we're building in ASP.NET MVC 4. Our solution is in Bitbucket, but if he can't run it, what choices do we have? I doubt he uses Visual Studio 2012.

    One idea is for us to publish our solution to the file system, send it to him, and have him create a local IIS website on his machine (assuming he isn't using a Mac). Mocking data or pointing to a test SQL database in Azure isn't a problem. Then he can make changes to the .css and .cshtml files. Will this even work? The point is that he needs to be able to test his changes. I know he can modify the views and just check in, but he needs to deliver a working design, so that seems inefficient. The graphics designer will have access to our test site so he can see how it works and what data and fields we have.

    Another idea is for him to build a static mock site using just HTML/CSS. Later I'd integrate his styles into the customer's solution, split his HTML into the partial views we use, and add Razor syntax. Again, we'd like to leverage the graphics designer for all of this.

    Is there a best practice documented around this subject? How do other teams deal with this situation?

    Read the article

  • Question about API and Web application code sharing

    - by opendd
    This is a design question. I have a multi-part application with several user types. There is a user client for the patient that interacts with a web service. There is an API evolving behind the web service that will be exposed to institutional "users", and an interface for clinicians, researchers, and admin types. The patient UI is Flex. The clinician/admin portion of the application is RoR. The API is RoR/Rack based. The web service component is Java WS. All components access the same data source. These components are deployed as separate components to their own subdomains. This decision was made to allow for scaling the components individually as needed.

    Initially, the decision was made to split the code for the RoR web application from the RoR API, in the interests of security and keeping the components focused on specific tasks. Over the course of time there is necessarily going to be overlap, and I am second-guessing my decision to keep the code totally separate. I am noticing code from the admin side being lifted, modified, and used in the API. This being the case, I have been considering merging the Ruby-based repositories. I am interested in ideas and insight on this situation, along with the reasoning behind your thoughts. Thanks.

    Read the article

  • Do you need all that data?

    - by BuckWoody
    I read an amazing post over on Ars Technica (link: http://arstechnica.com/science/news/2010/03/the-software-brains-behind-the-particle-colliders.ars) about the LHC, or as they are also known, the "particle colliders". Beyond the pure scientific geek awesomeness, these instruments have the potential to collect more data than you can (or possibly should) store. Actually, this problem has a lot in common with a BI system. There's so much granular detail available in the source systems that a designer has to decide how, and how much, to roll up the data. Whenever you do that, you lose fidelity, but in many cases that's OK.

    Take, for example, your car's speedometer. You don't actually need to track each and every point of speed as it happens. You only need to know that you're hovering around the speed limit at a certain point in time. Since this is the way that humans perceive data, is there some lesson we should take in the design of data "flows" - and what implications does this have for new technologies like StreamInsight?

    Read the article

  • Game Components, Game Managers and Object Properties

    - by George Duckett
    I'm trying to get my head around component-based entity design. My first step was to create various components that could be added to an object. For every component type I had a manager, which would call every component's update function, passing in things like keyboard state etc. as required. The next thing I did was remove the object and just have each component carry an id; an object is then defined by the components sharing the same id.

    Now I'm thinking that I don't need a manager for all my components. For example, I have a SizeComponent, which just has a Size property. As a result the SizeComponent doesn't have an update method, and its manager's update method does nothing.

    My first thought was to have an ObjectProperty class which components could query, instead of keeping the data as properties of components. So an object would have a number of ObjectProperty and ObjectComponent instances. Components would hold update logic that queries the object for properties, and the manager would manage calling each component's update method.

    This seems like over-engineering to me, but I don't think I can get rid of the components, because I need a way for the managers to know which objects need which component logic to run (otherwise I'd just remove the component completely and push its update logic into the manager).

    Is this (having ObjectProperty, ObjectComponent and ComponentManager classes) over-engineering? What would be a good alternative?
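
    A minimal hedged sketch of the arrangement described above (all names hypothetical): properties are pure data with no manager, while components keep the per-frame logic that managers drive.

        // pure data: no update method, no manager needed
        class SizeProperty { public float Width; public float Height; }
        class PositionProperty { public float X; public float Y; }

        // behavior: a manager calls Update on these each frame
        interface IObjectComponent
        {
            int ObjectId { get; }
            void Update(float elapsedSeconds);
        }

        class MovementComponent : IObjectComponent
        {
            public int ObjectId { get; set; }
            public PositionProperty Position { get; set; } // queried from the object

            public void Update(float elapsedSeconds)
            {
                Position.X += 10f * elapsedSeconds; // placeholder logic
            }
        }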

    Read the article

  • Structuring database for multi-object "activity" and "following" functionalities

    - by romaninsh
    I am working on a web application which operates with different types of objects, such as users, profiles, pages etc. All objects have a unique object_id. When objects interact, that may produce "activity", such as a user posting on a page or profile. Activity may be related to multiple objects through their object_id. Users may also follow "objects", and they need to be able to see a stream of relevant activity. Could you provide me with some data structure suggestions which would be efficient and scalable? My goal is to show activity limited to the objects which the user is following. I am not limited to relational databases.

    Update: As I'm getting advice on ORMs and how to index things, I'd like to stress my question again. According to my current design model, the database structure contains Activity and Follower tables. A database like that is quite easy to implement, and while the Activity and Follower tables contain far more records than the upper level, that's tolerable. But when it comes to creating a "timeline" table, it becomes a nightmare: for every user I need to reference all the activities of the objects he follows, and in terms of records that easily gets out of control. Please suggest how to change this structure to avoid timeline creation while still being able to quickly retrieve activity for any given user. Thanks.
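
    A hedged sketch, not from the original post (table and column names are assumed, since the original diagram isn't reproduced here): one alternative to a materialized timeline is to assemble it at read time from the Follower and Activity tables, via the junction that links an activity to its objects.

        SELECT a.*
        FROM activity a
        JOIN activity_object ao ON ao.activity_id = a.id
        JOIN follower f         ON f.object_id    = ao.object_id
        WHERE f.user_id = ?
        ORDER BY a.created_at DESC
        LIMIT 50;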

    Read the article

  • Cost to licence characters or ships for a game

    - by Michael Jasper
    I am producing a game pitch document for a university game design class, and I am looking for examples of licencing costs for using characters or ships from other IP holders in a game. For example:

    - cost of using an X-Wing in a game, licenced from Lucas
    - cost of using the Enterprise in a game, licenced from Paramount
    - cost of using the Space Shuttle (if any), licenced from NASA

    EDIT: The closest information I can find is from an article about Knights of the Old Republic, but it isn't nearly specific enough for my needs:

        What Kotick means by Lucas being the principal beneficiary of the success of The Old Republic is that there are most likely clauses in the license agreement that give percentages, points, or another denomination of revenue out to Lucas and his people just for the Star Wars name, and that amount is presumed to be a great deal of money. Kotick is saying that because the cost of the license is so prohibitive, as he has personally had experience with in his position as CEO of Activision Blizzard, that EA will not be able to be profitable because of the hemorrhaging of money to the licensor.

    EDIT 2: Another vague source states that FOX uses a "five-figure rule" (presumably between $10,000 and $99,999):

        It seems FOX, like most studios, will not license individuals to create new works based upon their products. They will only commission individuals of their choosing if they elect to branch out into expanded product lines related to those licenses. Alternately, they are open to making the licencing available to large corporations with access to global markets, but only if those corporations agree to what Ms Friedman called a "five-figure guarantee". Presumably this means that the corporation seeking the licensing must agree to pay a 5-figure sum for that license, and be confident that their product will sell enough volume to recoup that fee, and to produce sufficient profits to make the acquisition worth their while.

    Thank you!

    Read the article

  • Browser-based GUI for a python application

    - by ack__
    I want to create a web/browser-based GUI for a command-line Python application. The goal is to make use of HTML/JS technologies to create this GUI. Like the application itself, it needs to run on Linux and Windows, and the interface will be accessible only from localhost (not exposed to the internet). The GUI will contain 5 to 10 pages. I don't want a traditional desktop GUI that embeds HTML/JS, but just a bunch of HTML files and some kind of controller between those and the application. I also want to make use of asynchronous programming (ajax-like) so I can load and display data in the GUI without refreshing the whole page. I'd probably use jQuery for that and a couple of other things.

    How would you recommend designing this? Performance is not the key here; I'm rather looking at reliability, portability, and simplicity. I'm thinking of using a lightweight Python HTTP server/framework (like CherryPy) and maybe later a Python templating system (at the beginning it will just be a couple of pages).

    EDIT: I'm looking for ideas/recommendations on how to build this, not for alternatives to a browser/web-based GUI.
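
    A minimal hedged sketch of the CherryPy option mentioned above (paths and endpoint names are made up): static HTML/JS served from a folder, plus a small JSON endpoint that the pages can poll with jQuery without a full refresh.

        import json
        import os
        import cherrypy

        STATIC_DIR = os.path.abspath('static')  # holds index.html, app.js, ...

        class Controller:
            @cherrypy.expose
            def index(self):
                # hand the browser the GUI's entry page
                return cherrypy.lib.static.serve_file(
                    os.path.join(STATIC_DIR, 'index.html'))

            @cherrypy.expose
            def status(self):
                # a JSON endpoint for $.getJSON('/status', ...) calls
                cherrypy.response.headers['Content-Type'] = 'application/json'
                return json.dumps({'running': True})

        cherrypy.config.update({'server.socket_host': '127.0.0.1'})  # localhost only
        cherrypy.quickstart(Controller(), '/', {
            '/static': {'tools.staticdir.on': True,
                        'tools.staticdir.dir': STATIC_DIR}})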

    Read the article

  • Character progression through leveling, skills or items?

    - by Anton
    I'm working on a design for an RPG game, and I'm having some doubts about the skill and level system. I'm going for a more casual, explorative gaming experience, and so thought about lowering the game complexity by simplifying character progression. But I'm having trouble deciding between the following:

    - Progression through leveling: no complex skill progression; leveling increases base stats.
    - Progression through skills: no leveling or base stat changes; skills progress through usage.
    - Progression through items: more focus on stat-changing items; items confer skills; no leveling.

    However, I'm uncertain what the effects on gameplay might be in the end. So, my question is this: what would be the effects of choosing one of the above alternatives over the others? (Particularly with regards to the style and feel of the gameplay.)

    My take on it is that the first sacrifices more frequent rewards and customization in favor of simpler gameplay; the second sacrifices explicit customization and player control in favor of more frequent rewards and somewhat simpler gameplay; while the third sacrifices inventory simplicity and a player metric in favor of player control, customization, and progression simplicity.

    Addendum: I'm not really limiting myself to the above three; they are just the ones I liked most and am primarily interested in.

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such, the structure looks like:

        variable myData
        BuildHomePage()
            variable graph = new BuildGraphPage(myData)
            variable table = BuildTablePage(myData)

    BuildGraphPage and BuildTablePage both require access to the data, the myData object. In the above example, I've passed the myData object to two constructors. This is what I'm doing now, in my current project. The myData object and its properties are all readonly.

    The problem is, the number of pages which require this object has grown. In the real project there are currently 4, but the new spec is to have about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship!

    This poses the question whether it's better practice to continue as I have, or to refactor and create a new static class for myData which can be referenced from anywhere in my project. I guess my ability to use Google is poor, because I did try to find an appropriate pattern, as I am sure this type of design must be commonplace, but my search returned nothing. Is there a pattern which is suited, or do best practices lean towards one implementation over another?
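
    A hedged sketch, not part of the original question (the IPageBuilder interface is invented for illustration): keeping constructor injection but centralizing the wiring keeps myData non-global while reducing the 20-page plumbing to one line per page.

        interface IPageBuilder { void Build(); }

        var builders = new List<IPageBuilder>
        {
            new BuildGraphPage(myData),
            new BuildTablePage(myData),
            // ...the remaining page builders, one line each
        };

        foreach (var builder in builders)
            builder.Build();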

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place it in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data).

    I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.

            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.

                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).

                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);

                    // Upload using some wrapper for an ORM or DAL
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable for anyone else to pick up once it becomes legacy.

    Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently, it becomes a real bear, but this cannot be that uncommon of a problem that someone has not thought of a better way to do it. Any suggestions? Is the current method defensible? What are the alternatives?

    Edit: Just to clarify, this is a large enterprise app, so no matter which solution works, there will be plenty of people constantly maintaining its rules and data (around 10). Also, the data changes frequently enough to say that some sort of centralized server system is basically a must.
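
    A hedged aside, not from the original discussion: for rules that are plain arithmetic or boolean expressions, .NET's built-in DataTable.Compute can evaluate a string without round-tripping through JavaScript; whether its expression dialect is rich enough for these rules is an open question.

        using System.Data;

        // the expression string could come straight from a rules row in the DB
        var evaluator = new DataTable();
        object result = evaluator.Compute("(3 + 4) * 2 >= 10", null);  // true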

    Read the article

  • What are the factors affecting a new programming language?

    - by Saurav Sengupta
    I am developing a new general-purpose programming language of my own design; it's currently my own personal project. I have read some experts saying that new languages do not usually survive (unfortunately I can't find a reference to that right now). What are the most substantial problems that a new language faces?

    The language syntax is similar to the C/Python families, it does not use S-expressions, and it is an imperative language, but I'm including first-class functions to provide the facilities of currying. In particular, I am concentrating on translating the source language to an intermediate language for execution by an interpreter, but I'm not in a position to translate to native code yet. What would be the issues with that? I've not personally used many non-native-code languages, so I'm not well aware of the performance issues on today's machines.

    I also can't decide on a lexer and parser generator. What would be the pros and cons of Flex and Yacc vs. a hand-written lexer and parser? And what benefits would LLVM provide? I need to get the interpreter ready as quickly as possible.

    Finally, what factors will affect the language's use after release? I am planning a small library of essentials and full documentation for the first phase.

    Read the article

  • Command-Query-Separation and multithreading safe interfaces

    - by Tobias Langner
    I like the command-query separation pattern (from OOSC / Eiffel - basically, you either return a value or you change the state of the class, but not both). This makes reasoning about the class easier, and it is easier to write exception-safe classes. Now, with multithreading, I run into a major problem: the separation of the query and the command basically invalidates the result of the query, as anything can happen between the two. So my question is: how do you handle command-query separation in a multithreaded environment?

    Clarification example: a stack with command-query separation would have the following methods:

    - push (command)
    - pop (command - but does not return a value)
    - top (query - returns the value)
    - empty (query)

    The problem here is: I can get empty as status, but then I cannot rely on top really retrieving an element, since between the call to empty and the call to top, the stack might have been emptied. The same goes for pop and top: if I get an item using top, I cannot be sure that the item I pop is the same one. This can be solved using external locks, but that's not exactly what I'd call thread-safe design.
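
    A hedged illustration, not from the original post: the usual compromise is to fold the query and the command into one atomic operation that deliberately breaks strict CQS, as .NET's ConcurrentStack<T>.TryPop does. A sketch of that shape (gate and items are assumed fields of a list-backed stack):

        // atomically combines empty + top + pop under one lock
        public bool TryPop(out T item)
        {
            lock (gate)
            {
                if (items.Count == 0)
                {
                    item = default(T);
                    return false;
                }
                item = items[items.Count - 1];
                items.RemoveAt(items.Count - 1);
                return true;
            }
        }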

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders:

    1. When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls, one per each of N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see point 3.)
    2. When dealing with a logical object (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shaders per each object? Considering an entity class, should I create and define the shader at object initialization?
    3. When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO at (0, 0) and pass a uniform offset to the shader to calculate its real position?
    4. Could you give an example of Data Oriented Design applied to a generic zombie class? Is the following good?

    ZombieList class:

        class ZombieList
        {
            GLuint vbo;                      // generic zombie vertex model
            std::vector<color> colors;       // objects' default colors
            std::vector<texture> textures;   // objects' textures
            std::vector<vector3D> positions; // objects' positions

        public:
            unsigned int create(); // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*); // move towards player, attack if near
            void draw();          // draw every zombie
        };

    Example:

        Player p;
        ZombieList zl;

        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...

        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }

    Read the article
