Search Results

Search found 144 results on 6 pages for 'uniqueness'.

Page 3/6

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store a lot (1000s) of items. A pair of properties on each item is used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of existing items before iterating over the list of (possibly) new items. My theory is that on the first iteration, all the items will be faulted in, and assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly:

        1. Use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (current approach).
        2. Use a predicate to filter the set of existing items against the properties of the "new" items.
        3. Use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items).

    Is option 3 likely to be faster than my current approach? Do you know of a better way?
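
    As an aside on the pure algorithmic point, here is a minimal sketch (in Python, with hypothetical guid/pub_date fields standing in for the two uniquing properties) of how hashing the property pair turns the O(N^2) comparison into a single O(N) pass; whether Core Data's predicate route (option 3) beats it depends on indexing and faulting behaviour, which the sketch deliberately ignores.

        from collections import namedtuple

        Item = namedtuple("Item", "guid pub_date title")

        existing = [Item("a1", "2010-06-24", "old post")]
        incoming = [Item("a1", "2010-06-24", "duplicate"),
                    Item("b2", "2010-06-25", "new post")]

        # Build the key set once; membership tests are then O(1) on average,
        # so the whole uniquing step is one O(N) pass instead of O(N^2).
        seen = {(i.guid, i.pub_date) for i in existing}

        unique_new = []
        for item in incoming:
            key = (item.guid, item.pub_date)
            if key not in seen:
                seen.add(key)
                unique_new.append(item)

        print(unique_new)  # only the "b2" item survives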

    Read the article

  • Spring Data Neo4J @Indexed(unique = true) not working

    - by Markus Lamm
    I'm new to Neo4J and have what is probably an easy question. There are NodeEntitys in my application, and a property (name) is annotated with @Indexed(unique = true) to achieve uniqueness, like I do in JPA with @Column(unique = true). My problem is that when I persist an entity with a name that already exists in my graph, it works fine anyway. But I expected some kind of exception here...?! Here's an overview of my basic code:

        @NodeEntity
        public abstract class BaseEntity implements Identifiable {
            @GraphId
            private Long entityId;
            ...
        }

        public class Role extends BaseEntity {
            @Indexed(unique = true)
            private String name;
            ...
        }

        public interface RoleRepository extends GraphRepository<Role> {
            Role findByName(String name);
        }

        @Service
        public class RoleServiceImpl extends BaseEntityServiceImpl<Role> implements {
            private RoleRepository repository;

            @Override
            @Transactional
            public T save(final T entity) {
                return getRepository().save(entity);
            }
        }

    And this is my test:

        @Test
        public void testNameUniqueIndex() {
            final List<Role> roles = Lists.newLinkedList(service.findAll());
            final String existingName = roles.get(0).getName();
            Role newRole = new Role.Builder(existingName).build();
            newRole = service.save(newRole);
        }

    That's the point where I expect something to go wrong! How can I ensure the uniqueness of a property without checking it for myself?? THANKS IN ADVANCE FOR ANY IDEAS!! P.S.: I'm using neo4j 1.8.M07, spring-data-neo4j 2.1.0.BUILD-SNAPSHOT and Spring 3.1.2.RELEASE.

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?" Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column…

    Example Table

    Here's a T-SQL script to create the test table and populate it with 1,000 rows:

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER IDENTITY NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX) (some_value, lob_data)
        SELECT TOP (1000) N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let's run a query to subtract one from every value in the some_value column:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan. The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

    Test 2: Simple Update with an Index

    Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column:

        CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: the Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case. The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here's the script to create a new unique index, and drop the old one:

        CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);
        GO
        DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest;

    Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let's see:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    Whoa! Now look at the elapsed time and logical reads:

        Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        CPU time = 172 ms, elapsed time = 16172 ms.

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what's doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
    I'll reproduce it here for convenience. Notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I'll talk more about this difference shortly.

    The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

    Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4.

    By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3, of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That's fine, but it's not possible to generalize this logic to work with every possible update query.

    SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It's worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible.

    Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

    A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

    The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

    Beating the Set-Based Update with a Cursor

    One situation where SQL Server can see that false unique-key violations aren't possible is where it can guarantee that only one row is being updated.
    Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

        SET NOCOUNT ON;
        SET STATISTICS XML, IO, TIME OFF;

        DECLARE @PK INTEGER, @StartTime DATETIME;
        SET @StartTime = GETUTCDATE();

        DECLARE curUpdate CURSOR
            LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
        FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;

        OPEN curUpdate;

        WHILE (1 = 1)
        BEGIN
            FETCH NEXT FROM curUpdate INTO @PK;

            IF @@FETCH_STATUS = -1 BREAK;
            IF @@FETCH_STATUS = -2 CONTINUE;

            UPDATE dbo.LOBtest
            SET some_value = some_value - 1
            WHERE CURRENT OF curUpdate;
        END;

        CLOSE curUpdate;
        DEALLOCATE curUpdate;

        SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

    That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let's re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

        IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
            DROP TABLE dbo.LOBtest;
        GO
        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX) (pk, some_value, lob_data)
        SELECT TOP (1000) N.n, N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Now here's a query to modify the cluster keys:

        UPDATE dbo.LOBtest
        SET pk = pk + 1;

    As the query plan shows, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

        Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0,
        read-ahead reads 0, lob logical reads 36015, lob physical reads 0,
        lob read-ahead reads 15992.

        Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0,
        read-ahead reads 0, lob logical reads 34000, lob physical reads 0,
        lob read-ahead reads 8000.

        SQL Server Execution Times:
        CPU time = 483 ms, elapsed time = 17884 ms.

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you'll get a much more efficient plan that just passes the cluster key (including uniqueifier) around, with no LOB data or other non-key columns. A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don't have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means.
    If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • Federated access to desktop and network resources in MS AD domains

    - by Glenn Stauffer
    We are looking for a way to provide members of three loosely connected organizations with access to authenticated resources such as file shares, printers, and lab computers. I've seen federation facilities for web resources; is there something similar for domain logins? Our Active Directory domains are not connected, so we would have to use email addresses for the username to ensure uniqueness. Is there any OpenID-like mechanism that works for AD logins?

    Read the article

  • [YYYY].[MM].[DD].[hh][mm] vs. [major].[minor].[revision] [closed]

    - by ef2011
    Possible Duplicate: What “version naming convention” do you use? I am currently debating between the traditional versioning convention [major].[minor].[revision] and my own, almost whimsical, [YYYY].[MM].[DD].[hh][mm] for a new project I am starting. I understand that [major].[minor].[revision] is probably the most popular versioning method on the planet and it is indeed pretty straightforward and reasonable, except that determining which changes merit the label "major", "minor" or even "revision" could be... subjective. A versioning system based on a timestamp is purely non-subjective and guarantees uniqueness. Which one would you choose for your project and why?
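
    For what it's worth, the timestamp scheme is a one-liner to generate; a small sketch in Python (the exact format string is an assumption about how the [hh][mm] part is meant to read):

        from datetime import datetime, timezone

        # [YYYY].[MM].[DD].[hh][mm], e.g. 2011.03.14.0927
        version = datetime.now(timezone.utc).strftime("%Y.%m.%d.%H%M")
        print(version)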

    Read the article

  • What's the progress on Haskell records?

    - by mmh
    Recently I stumbled once again on the issues with Haskell's records, in particular the uniqueness of field names (it's a pain...). I already read A proposal for records in Haskell from SPJ and Greg Morrisett, but its last update was in 2003. Another paper, Lightweight Extensible Records for Haskell from SPJ and Mark Jones, is even older: it's from a Haskell workshop in 1999. Now I wonder whether the process of giving Haskell new records has made any progress. Does anybody know something about it, or can you point me to some further reading?

    Read the article

  • SQL conditional row insert

    - by Pablo
    Is it possible to insert a new row only if a condition is met? For example, I have this table with no primary key nor uniqueness:

        +----------+--------+
        | image_id | tag_id |
        +----------+--------+
        |       39 |      8 |
        |        8 |     39 |
        |        5 |     11 |
        +----------+--------+

    I would like to insert a row if a combination of image_id and tag_id doesn't exist, for example:

        INSERT ..... WHERE image_id!=39 AND tag_id!=8
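
    One way to make the database enforce this itself is to declare the pair unique and let duplicate inserts be skipped; MySQL's INSERT IGNORE behaves this way against a unique index. A minimal sketch below in Python with SQLite, where the table and column names follow the question and the rest is an assumption:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE image_tags (image_id INTEGER, tag_id INTEGER)")
        # The unique index is what actually enforces "no duplicate pair".
        conn.execute("CREATE UNIQUE INDEX pair_idx ON image_tags (image_id, tag_id)")

        def add_tag(image_id, tag_id):
            # INSERT OR IGNORE silently skips rows that would violate the index.
            conn.execute("INSERT OR IGNORE INTO image_tags VALUES (?, ?)",
                         (image_id, tag_id))

        add_tag(39, 8)
        add_tag(39, 8)   # duplicate pair: skipped
        add_tag(8, 39)   # different pair: inserted
        print(conn.execute("SELECT COUNT(*) FROM image_tags").fetchall())  # [(2,)]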

    Read the article

  • Are GUID primary keys bad in theory, or just practice?

    - by Yarin
    Whenever I design a database, I automatically start with an auto-generating GUID primary key for each of my tables (excepting look-up tables); I know I'll never lose sleep over duplicate keys, merging tables, etc. To me it just makes sense philosophically that any given record should be unique across all domains, and that that uniqueness should be represented in a consistent way from table to table. I realize it will never be the most performant option, but putting performance aside, I'd like to know if there are philosophical arguments against this practice?

    Read the article

  • Meshing different systems of keys together in XML Schema

    - by Tom W
    Hello SO, I'd like to ask people's thoughts on an XSD problem I've been pondering. The system I am trying to model is thus: I have a general type that represents some item in a hypothetical model. The type is abstract and will be inherited by all manner of different model objects, so the model is heterogeneous. Furthermore, some types exist only as children of other types. Objects are to be given an identifier, but the scope of uniqueness of this identifier varies. Some objects - we will call them P (for Parent) objects - must have a globally unique identifier. This is straightforward and can use the xs:key schema element. Other objects (we can call them C objects, for Child) are children of a P object and must have an identifier that is unique only in the scope of that parent. For example, object P1 has two children, objects C1 and C2, and object P2 has one child, object C3. In this system, the identifiers given could be as follows:

        P1: 1 (1st P object globally)
        P2: 2 (2nd P object globally)
        C1: 1 (1st C object of P1)
        C2: 2 (2nd C object of P1)
        C3: 1 (1st C object of P2)

    I want the identity syntax of every model object to be identical if possible, therefore my first pass at implementing is to define a single type:

        <xs:complexType name="ModelElement">
            <xs:attribute name="IDMode" type="IdentityMode"/>
            <xs:attribute name="Identifier" type="xs:string"/>
        </xs:complexType>

    where IdentityMode is an enumerated value:

        <xs:simpleType name="IdentityMode">
            <xs:restriction base="xs:string">
                <xs:enumeration value="Identified"/>
                <xs:enumeration value="Indexed"/>
                <xs:enumeration value="None"/>
            </xs:restriction>
        </xs:simpleType>

    Here "Identified" signifies a global identifier, and "Indexed" indicates an identifier local only to the parent. My question is: how do I enforce these uniqueness conditions using unique, key or other schema elements, based on the IdentityMode property of the given subtype of ModelElement?

    Read the article

  • How to use Groovy Set for unique elements?

    - by Guy
    As simple as this must be, I still can't understand where I'm going wrong:

        class A {
            boolean equals(o) { true }
        }
        def s = [new A(), new A()] as Set
        assert s.size() == 1 // Assertion failed: actually gives 2

    Which method should I override in order to get uniqueness?
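
    For context, hash-based sets (which is what Groovy's as Set gives you) locate an element by its hash code before equals() is ever consulted, so the two methods must agree; in Groovy/Java that means overriding hashCode() alongside equals(). A rough Python analogue of the same contract, illustrative only:

        class A:
            def __eq__(self, other):
                # every A compares equal, mirroring the Groovy example
                return isinstance(other, A)

            def __hash__(self):
                # equal objects must produce equal hashes, or the set files
                # them in different buckets and never even compares them
                return 1

        s = {A(), A()}
        print(len(s))  # -> 1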

    Read the article

  • Rails forms: render different actions based on validation

    - by Martin Petrov
    Is it possible to render different actions based on what fails at validation? For example, I have one field in the form - email address. It is validated like this:

        validates :email, :presence => true, :uniqueness => { :case_sensitive => false }

    In the controller:

        def create
          @user = User.new(params[:user])
          if @user.save
            redirect_to somewhere
          else
            # render :new if email is blank
            # redirect_to somewhere if email is taken
          end
        end

    Read the article

  • A BYOD World in Mobile Enterprise Brings the Need to Adapt

    - by Webgui
    Yesterday brought a lot of news coverage that Cisco has stopped funding and planning its Cius enterprise-grade tablet. Citing "market transitions" in which an increasing number of people bring their own smartphones and tablets to work, Cisco General Manager OJ Winge said in a post on the company's official blog that "Cisco will no longer invest in the Cisco Cius tablet form factor, and no further enhancements will be made to the current Cius endpoint beyond what's available today." Employees are "bringing their preferences to work" and collaboration "has to happen beyond a walled garden," he said. The blog post also cited a recently released Cisco study which found that 95% of organizations surveyed allow employee-owned devices in some way, shape or form in the office, and 36% of surveyed enterprises provide full support for employee-owned devices.

    How is Cisco planning to move forward to adapt to this changing business environment? Instead of focusing on tablets for enterprise customers, Cisco will instead "double down" on software that works across a variety of operating systems and smartphones and tablets, Winge said. See the post from the Cisco blog here: http://blogs.cisco.com/collaboration/empowering-choice-in-collaboration/

    We at Gizmox recognize this need to adapt to the changing environment. Our Enterprise Mobile solution is designed and built for that post-PC, BYOD business world. We recognized the importance of providing a cross-platform solution that can easily target different devices and operating systems, so we went with a web-based mobile application approach built on the new open web standard, HTML5. Our solution provides both the client and server side programming, and its uniqueness is that it allows building those cross-platform HTML5 mobile applications within Visual Studio using classic visual form-based development. As a result, .NET developers can build secure, efficient, data-centric enterprise mobile applications for cross-platform mobile devices with their existing skills and tools. See our new video about our Enterprise Mobile solution. Enterprise applications today need to work on all devices, across different platforms and OS's. It's just a fact of life. How about you – do you bring your own device to work? What's your company's BYOD policy?

    Read the article

  • Announcing Fusion Middleware Innovation Awards

    - by Michelle Kimihira
    Author: Neela Chaudhari. Every year at OpenWorld, Oracle announces the winners of its most prestigious awards in Middleware, the Fusion Middleware Innovation Awards. This year, we'll be announcing the winners and highlighting a few of their original implementations during this key session in the Middleware stream: CON9162, "Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects", at 11:45 AM on Tuesday, October 2nd in Moscone West, 3001. In addition, we'll give a sneak peek of a few winners during GEN9394, the Fusion Middleware General Session with Hasan Rizvi, at 10:15 AM on Tuesday, October 2nd in Moscone West, Hall D!

    What kinds of customers win the Fusion Middleware Innovation Awards? Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The winners are selected by a panel of judges that score each entry across multiple scoring categories. This year, the categories included:

        Oracle Exalogic
        Cloud Application Foundation
        Service Integration (SOA) and BPM
        WebCenter
        Identity Management
        Data Integration
        Application Development Framework and Fusion Development
        Business Analytics (BI, EPM and Exalytics)

    Last year at OpenWorld 2011 we had standing room only in our session, so come early! We had over 30 innovative customers that won the award, including companies like BT, Choice Hotels, Electronic Arts, Clorox Company, ING, Dunkin Brands, Telenor, Haier, AT&T, Manpower, Herbal Life and many others. Did you miss your chance this year to nominate your company? Come join us in the awards session to get an edge on next year's submission, and watch this blog for the next opportunity in 2013. There are also other awards as part of Oracle's Excellence awards program, or subscribe to our regular Fusion Middleware newsletter to get the latest reminders.

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems, and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series, the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether or not actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database, in my opinion, would allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks like the uniqueness of entity IDs. Edit for clarification: The main question was whether using a SQL database for the actual entity management (not just storing the game state on the disk) in a real-time game would still yield a framerate appropriate for a game, or whether someone is aware of projects that demonstrate SQL in a video game.
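
    To make the idea concrete, here is a minimal sketch (Python with the built-in sqlite3 module, all table and component names invented) of components as tables and a system as one sequential pass over a join, with the database held entirely in RAM as the post suggests:

        import sqlite3

        # In-memory database, per the "store entity data exclusively in RAM" idea.
        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE entities (id INTEGER PRIMARY KEY);  -- unique IDs for free
            CREATE TABLE position (entity INTEGER REFERENCES entities(id), x REAL, y REAL);
            CREATE TABLE velocity (entity INTEGER REFERENCES entities(id), dx REAL, dy REAL);
        """)

        eid = db.execute("INSERT INTO entities DEFAULT VALUES").lastrowid
        db.execute("INSERT INTO position VALUES (?, 0.0, 0.0)", (eid,))
        db.execute("INSERT INTO velocity VALUES (?, 1.0, 0.5)", (eid,))

        def movement_system(dt):
            # Each system is a sequential pass over the components it needs.
            db.execute("""
                UPDATE position
                SET x = x + (SELECT dx FROM velocity WHERE entity = position.entity) * :dt,
                    y = y + (SELECT dy FROM velocity WHERE entity = position.entity) * :dt
                WHERE entity IN (SELECT entity FROM velocity)
            """, {"dt": dt})

        movement_system(1.0 / 60.0)
        print(db.execute("SELECT x, y FROM position").fetchone())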

    Read the article

  • 2012 Oracle Fusion Middleware Innovation Awards Announced

    - by Tanu Sood
    Guest Contributor: Margaret Harrist. Originally posted on Oracle NewsCentral. Companies from around the world were honored Tuesday for their innovative solutions using Oracle Fusion Middleware. This year's 27 award winners, representing 11 countries and a wide span of industries, wowed the judges with a range of projects across eight product categories. A panel of judges scored each entry across multiple categories, including the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the architecture's originality. In a general session just before the award presentation, Oracle Executive Vice President Hasan Rizvi highlighted a few of the winners' original implementations, including Nike, Los Angeles Department of Water and Power, and Nintendo of America. Congratulations to the 2012 winners:

        Oracle Exalogic: Netshoes, Claro, UL, and Ingersoll Rand
        Oracle Cloud Application Foundation: Mazda Motor Corporation, HOTELBEDS Technology, Globalia, Nike, and Comcast Corporation
        Oracle SOA and Oracle BPM: NTT Docomo, Schneider National, Amadeus, and Motability
        Oracle WebCenter: News Limited, University of Louisville, China Mobile Jiangsu, Life Technologies
        Oracle Identity Management: Education Testing Service and Avea
        Oracle Data Integration: Raymond James and William Morrison Supermarkets
        Oracle Application Development Framework and Oracle Fusion Development: Qualcomm, Micros Systems, and Marfin Egnatia Bank
        Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics): INC Research, Experian, and Hologic

    Read the article

  • Congratulations to 2012 Innovation Award winners in BPM category

    - by Manoj Das
    Last year many of our customers went live on BPM 11g. It is my extreme pleasure to congratulate two of them – Amadeus and Navistar – for being awarded the Oracle Fusion Middleware Innovation Award at Oracle OpenWorld 2012. We invited our customers to submit their most innovative BPM implementations that have delivered substantiated value to them. This year we saw more than 20 submissions from our customers seeing significant business value from their live BPM 11g deployments. The submissions came from across the world, spanning various industry verticals including manufacturing, healthcare, logistics, Hi-Tech, Public Sector, Education and covering many process usage patterns. Award submissions were evaluated based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.

    [Photo: Amadeus team receiving the Innovation Award from Hasan Rizvi]

    Congratulations to Amadeus and Navistar and their teams on being recognized from among some very strong submissions and, more importantly, for the business value delivered. It is an honor to be part of your success and to play a small role in the innovation you drive. Navistar is a leading truck manufacturing company which produces International® brand commercial and military trucks, MaxxForce® brand diesel engines, IC Bus™ brand school and commercial buses, and Navistar RV brands of recreational vehicles. The company also provides truck and diesel engine service parts. Amadeus is a leading transaction processor for the global travel and tourism industry, providing transaction processing power and technology solutions to both travellers and travel providers. Both Navistar and Amadeus have leveraged Oracle BPM Suite to improve visibility into their business and made their business more agile and efficient. We congratulate them again and wish them continued success in their business and future BPM initiatives.

    Read the article

  • What should a domain object's validation cover?

    - by MarcoR88
    I'm trying to figure out how to do validation of domain objects that need external resources, such as data mappers/DAOs. Firstly, here's my code:

        class User {
            const INVALID_ID = 1;
            const INVALID_NAME = 2;
            const INVALID_EMAIL = 4;

            int getID();
            void setID(Int i);
            string getName();
            void setName(String s);
            string getEmail();
            void setEmail(String s);

            int getErrorsForInsert(); // returns a bitmask of the INVALID_* constants
            int getErrorsForUpdate();
        }

    My worries are about the uniqueness of the email: checking it would require the storage layer. Reading others' code, it seems that two solutions are equally accepted. Both perform the unique validation in the data mapper, but some set an error state on the DO, user.addError(User.INVALID_EMAIL), while others prefer to throw a totally different type of exception that covers only persistence, like:

        UserStorageException {
            const INVALID_EMAIL = 1;
            const INVALID_CITY = 2;
        }

    What are the pros and cons of these solutions?

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems, for handling online deposits to stored value accounts (think campus card accounts for students). Here's my dilemma: stage 1 of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single-process, single-threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
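
    One small observation: the race exists because "take the next seqnum" and "send the request" are two separate steps. A toy sketch of fusing them into one critical section follows (Python; send_to_legacy_system is a made-up stand-in, and a threading lock only serializes a single process; across Unicorn workers the equivalent would be a shared lock, e.g. a row lock in Postgres):

        import itertools
        import threading

        _seq = itertools.count(1)
        _lock = threading.Lock()

        def send_to_legacy_system(seqnum, account, amount):
            # stand-in for the proprietary, order-sensitive protocol call
            print(f"deposit #{seqnum}: {amount} -> {account}")
            return True

        def deposit(account, amount):
            # Nothing can interleave between taking the number and sending,
            # so seqnums leave in exactly the order they were assigned.
            with _lock:
                seqnum = next(_seq)
                return send_to_legacy_system(seqnum, account, amount)

        deposit("acct-1", 25.00)
        deposit("acct-2", 10.00)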

    Read the article

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images were generated as shown in this response: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective. The best I can personally come up with would be to have it almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being generated. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is two fold I suppose: A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"? B) How does one generate such an image based on some set of inputs, a hash, and an index? Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm mis-applying this to hash tables rather than just hashes in general, but in that case I don't know how it would be "bounded" for the image.
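
    One plausible way to generate such pictures (an assumption about the general technique, not a claim about how the linked answer's images were made): hash a large set of keys, map each digest to an (x, y) cell of a fixed grid, and mark the cell. A uniform hash fills the grid evenly; clusters and blank stripes reveal bias. A small sketch in Python:

        import hashlib

        W = H = 64
        grid = [[" "] * W for _ in range(H)]

        for key in range(20000):
            digest = hashlib.md5(str(key).encode()).digest()
            # Low-order digest bytes pick a cell; a biased hash leaves gaps.
            x = int.from_bytes(digest[0:2], "little") % W
            y = int.from_bytes(digest[2:4], "little") % H
            grid[y][x] = "#"

        print("\n".join("".join(row) for row in grid))  # whitespace = unused buckets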

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 1

    - by Michelle Kimihira
    Author: Moazzam Chaudry. This year we recognized 29 customers for their innovative use of Oracle Fusion Middleware and their significant results. The winners were selected across 8 product categories from 11 countries, spanning diverse industries around the world. This is a two-part blog series.

    The 2012 Fusion Middleware Innovation Awards winners were announced at OOW on October 2nd by Hasan Rizvi (EVP Fusion Middleware and Java development), Amit Zavery (VP Product Management) and Ed Zou (VP Product Management) to an audience that included press, analysts and customers. Winners were selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The program is in its 6th year, and this year we were excited to have received over 250 submissions from customers around the globe. The winners were selected by a panel of internal and external judges; it was a difficult task selecting this year's most innovative projects. Judges scored each entry across multiple scoring categories.

    This year, winning use cases for Fusion Middleware include: improving customer experience by monitoring in real time and simplifying the user experience for tens of millions of customers; driving social engagement through social media channels in fields including healthcare; harnessing big data by analyzing and improving visibility across 60M+ customers and hundreds of terabytes of data; enabling mobile adoption by delivering a mobile news experience to 50% of the Australian population; embracing cloud computing by delivering hospitality services to 3000+ hotels and monitoring services to hospitals; and optimizing critical processes such as remarketing cars through tens of thousands of dealers.

    On Monday's blog, we will talk about the winners in each category and what customers had to say in the customer panel. Congratulations to the 2012 Oracle Fusion Innovation Award winners:

    Read the article

  • Unique Alpha numeric generator

    - by AAA
    Hi, I want to give our users in the database a unique alphanumeric ID. I am using the code below; will this always generate a unique ID? Below is the updated version of the code.

    Old PHP:

        // Generate Guid
        function NewGuid() {
            $s = strtoupper(md5(uniqid(rand(),true)));
            $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4) . '-' . substr($s,16,4) . '-' . substr($s,20);
            return $guidText;
        }
        // End Generate Guid

        $Guid = NewGuid();
        echo $Guid;
        echo "<br><br><br>";

    New PHP:

        // Generate Guid
        function NewGuid() {
            $s = strtoupper(uniqid("something",true));
            $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4) . '-' . substr($s,16,4) . '-' . substr($s,20);
            return $guidText;
        }
        // End Generate Guid

        $Guid = NewGuid();
        echo $Guid;
        echo "<br><br><br>";

    Will the second (new PHP) code guarantee 100% uniqueness?

    Final code:

        // Generate Guid
        function NewGuid() {
            $s = strtoupper(uniqid(rand(),true));
            $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4) . '-' . substr($s,16,4) . '-' . substr($s,20);
            return $guidText;
        }
        // End Generate Guid

        $Guid = NewGuid();
        echo $Guid;

        $alphabet = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ';

        function base_encode($num, $alphabet) {
            $base_count = strlen($alphabet);
            $encoded = '';
            while ($num >= $base_count) {
                $div = $num/$base_count;
                $mod = ($num-($base_count*intval($div)));
                $encoded = $alphabet[$mod] . $encoded;
                $num = intval($div);
            }
            if ($num) $encoded = $alphabet[$num] . $encoded;
            return $encoded;
        }

        function base_decode($num, $alphabet) {
            $decoded = 0;
            $multi = 1;
            while (strlen($num) > 0) {
                $digit = $num[strlen($num)-1];
                $decoded += $multi * strpos($alphabet, $digit);
                $multi = $multi * strlen($alphabet);
                $num = substr($num, 0, -1);
            }
            return $decoded;
        }

        echo base_encode($Guid, $alphabet);

    So for stronger uniqueness, I am using the $Guid as the key generator. That should be OK, right?
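
    As a point of comparison, the same idea sketched in Python: a random (version 4) UUID makes collisions astronomically unlikely rather than strictly impossible, and the base-62 step has to operate on the number itself. (Note that the PHP "Final code" above passes the hyphenated GUID string to base_encode, which expects a numeric $num.)

        import string
        import uuid

        ALPHABET = string.digits + string.ascii_letters  # base 62

        def unique_alnum_id():
            n = uuid.uuid4().int          # 128-bit integer, 122 random bits
            out = []
            while n:
                n, r = divmod(n, 62)      # base-62 encode the integer value
                out.append(ALPHABET[r])
            return "".join(reversed(out))

        print(unique_alnum_id())  # e.g. a ~22-character alphanumeric ID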

    Read the article

  • Selecting unique records in XSLT/XPath

    - by Daniel I-S
    I have to select only unique records from an XML document, in the context of an <xsl:for-each> loop. I am limited by Visual Studio to using XSL 1.0.

        <availList>
            <item>
                <schDate>2010-06-24</schDate>
                <schFrmTime>10:00:00</schFrmTime>
                <schToTime>13:00:00</schToTime>
                <variousOtherElements></variousOtherElements>
            </item>
            <item>
                <schDate>2010-06-24</schDate>
                <schFrmTime>10:00:00</schFrmTime>
                <schToTime>13:00:00</schToTime>
                <variousOtherElements></variousOtherElements>
            </item>
            <item>
                <schDate>2010-06-25</schDate>
                <schFrmTime>10:00:00</schFrmTime>
                <schToTime>12:00:00</schToTime>
                <variousOtherElements></variousOtherElements>
            </item>
            <item>
                <schDate>2010-06-26</schDate>
                <schFrmTime>13:00:00</schFrmTime>
                <schToTime>14:00:00</schToTime>
                <variousOtherElements></variousOtherElements>
            </item>
            <item>
                <schDate>2010-06-26</schDate>
                <schFrmTime>10:00:00</schFrmTime>
                <schToTime>12:00:00</schToTime>
                <variousOtherElements></variousOtherElements>
            </item>
        </availList>

    The uniqueness must be based on the value of the three child elements: schDate, schFrmTime and schToTime. If two item elements have the same values for all three child elements, they are duplicates. In the above XML, items one and two are duplicates. The rest are unique. As indicated above, each item contains other elements that we do not wish to include in the comparison. 'Uniqueness' should be a factor of those three elements, and those alone. I have attempted to accomplish this through the following:

        availList/item[not(schDate = preceding::schDate and schFrmTime = preceding::schFrmTime and schToTime = preceding::schToTime)]

    The idea behind this is to select records where there is no preceding element with the same schDate, schFrmTime and schToTime. However, its output is missing the last item. This is because my XPath is actually excluding items where all of the child element values are matched within the entire preceding document. No single item matches all of the last item's child elements, but because each element's value is individually present in another item, the last item gets excluded. I could get the correct result by comparing all child values as a concatenated string to the same concatenated values for each preceding item. Does anybody know of a way I could do this?

    Read the article

  • index help for a MySQL query using greater-than operator and ORDER BY

    - by Jaymon
    I have a table with at least a couple million rows and a schema of all integers that looks roughly like this:

        start
        stop
        first_user_id
        second_user_id

    The rows get pulled using the following queries:

        SELECT * FROM tbl_name
        WHERE stop >= M AND first_user_id=N AND second_user_id=N
        ORDER BY start ASC

        SELECT * FROM tbl_name
        WHERE stop >= M AND first_user_id=N
        ORDER BY start ASC

    I cannot figure out the best indexes to speed up these queries. The problem seems to be the ORDER BY, because when I take that out the queries are fast. I've tried all different types of indexes using the standard index format:

        ALTER TABLE tbl_name ADD INDEX index_name (index_col_1,index_col_2,...)

    And none of them seem to speed up the queries. Does anyone have any idea what index would work? Also, should I be trying a different type of index? I can't guarantee the uniqueness of each row, so I've avoided UNIQUE indexes. Any guidance/help would be appreciated. Thanks!

    Read the article
