Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 79/151 | < Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >

  • SQL DB design to support user feeds (in application like facebook)

    - by Yoav
    I have a social network server with a MySQL DB. I want to show users a feed like Facebook does, for example: "UserX is now friends with UserY", "UserX liked PostX", and so on. Currently I have a table:

    C1: UserId
    C2: LogType (now friend, did like, etc.)
    C3: ObjectId (can be a userId or a postId, depending on the LogType)

    Currently, to get all the logs to show to a user, I do the following:

    1. Get the userIds of all the user's friends.
    2. Query all rows whose C1 is in those userIds.
    3. Scan the results: if the LogType is DidLike, check whether the post's owner is the user, and if so add the row to the logs. And so on.

    Obviously this is not efficient at all, and I am looking for a better way. Here is what I have in mind: create a new table (in addition to the Log table) with:

    C1: UserId (the user whose feed the row belongs to)
    C2: LogId (from the Log table)
    C3: UserId of the one who did the action

    When querying logs, look in this table and fetch the related logs (by LogId) from the Log table. Updating the table: whenever a user does an action that should appear in the log:

    1. Add the log entry to the Log table.
    2. Work out which users are interested in the log (who the actor's friends are, who owns the post) and add the related entries to the new table (this must be done in the background).
    3. If a user unfriends another user, look for all rows where C3 equals the unfriended user's id and delete them.

    Any opinions? Other suggestions?
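
    For concreteness, a minimal sketch of the proposed fan-out-on-write scheme in Java/JDBC. The table and column names (log, feed and their columns) are made up for illustration, and the list of interested user ids is assumed to come from the friend/post-owner queries described in step 2:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.List;

        public class FeedWriter {
            private final Connection conn;

            public FeedWriter(Connection conn) { this.conn = conn; }

            // Called whenever a user performs a loggable action.
            public void addLogEntry(long actorId, String logType, long objectId,
                                    List<Long> interestedUserIds) throws SQLException {
                long logId;
                // 1. Insert the log entry itself.
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO log (actor_id, log_type, object_id) VALUES (?, ?, ?)",
                        Statement.RETURN_GENERATED_KEYS)) {
                    ps.setLong(1, actorId);
                    ps.setString(2, logType);
                    ps.setLong(3, objectId);
                    ps.executeUpdate();
                    try (ResultSet keys = ps.getGeneratedKeys()) {
                        keys.next();
                        logId = keys.getLong(1);
                    }
                }
                // 2. Fan out: one feed row per interested user (run in the background).
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO feed (user_id, log_id, actor_id) VALUES (?, ?, ?)")) {
                    for (long userId : interestedUserIds) {
                        ps.setLong(1, userId);
                        ps.setLong(2, logId);
                        ps.setLong(3, actorId);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
            }

            // Unfriending: drop the feed rows produced by the unfriended user.
            public void removeActorFromFeed(long userId, long unfriendedId) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE FROM feed WHERE user_id = ? AND actor_id = ?")) {
                    ps.setLong(1, userId);
                    ps.setLong(2, unfriendedId);
                    ps.executeUpdate();
                }
            }
        }

    Reading a user's feed is then a single indexed join from feed to log on log_id, ordered by log_id descending.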

    Read the article

  • Why is quicksort better than other sorting algorithms in practice?

    - by Raphael
    This is a repost of a question on cs.SE by Janoma. Full credits and spoils to him or cs.SE. In a standard algorithms course we are taught that quicksort is O(n log n) on average and O(n²) in the worst case. At the same time, other sorting algorithms are studied which are O(n log n) in the worst case (like mergesort and heapsort), and even linear time in the best case (like bubblesort) but with some additional needs of memory. After a quick glance at some more running times it is natural to say that quicksort should not be as efficient as others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm. Why, then, does quicksort outperform other sorting algorithms in practice? Does it have to do with the structure of real-world data? Does it have to do with the way memory works in computers? I know that some memories are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates).
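
    Not part of the original question, but for concreteness, a minimal in-place quicksort sketch in Java. The detail relevant to the memory question is that Hoare-style partitioning scans the array sequentially from both ends, which is exactly the access pattern caches and prefetchers reward, while the O(log n) average recursion depth keeps stack usage trivial:

        import java.util.Arrays;

        public class QuickSort {
            public static void sort(int[] a) { sort(a, 0, a.length - 1); }

            private static void sort(int[] a, int lo, int hi) {
                if (lo >= hi) return;
                int pivot = a[lo + (hi - lo) / 2];
                int i = lo, j = hi;
                // Hoare partition: both indices move sequentially through memory.
                while (i <= j) {
                    while (a[i] < pivot) i++;
                    while (a[j] > pivot) j--;
                    if (i <= j) {
                        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                        i++; j--;
                    }
                }
                sort(a, lo, j);   // average recursion depth is O(log n),
                sort(a, i, hi);   // so the "recursion is bad" worry rarely bites
            }

            public static void main(String[] args) {
                int[] data = {5, 3, 8, 1, 9, 2};
                sort(data);
                System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8, 9]
            }
        }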

    Read the article

  • Should I listen to my employer and use CASE tools?

    - by omsharp
    My employer (not a developer) thinks that CASE tools will help us improve our development process and documentation. I am not sure about that. We are a small team of 5 developers building mobile banking solutions for local clients. I think CASE tools will be a waste of time and money: they need to be purchased, and we will need some time before we get used to them and become efficient working with them for modeling and such. Code generation is another issue; I really think that CASE-generated code won't be as good as code written by good developers. I think that if we stick with agile principles and design patterns, use TDD, and keep our code clean, we should be good. As far as analysis and design goes, I think simple UML diagrams on a whiteboard should do the trick. Documentation is good and important, but it should be kept as lean as possible, and we should not focus on docs and forget the code. This is what I think. Am I correct? Or should I listen to my employer and start researching for an appropriate CASE tool?

    Read the article

  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5-10). Here are the features I would like, but any recommendation is welcome:

    - Bug/feature tracking on a per-project basis.
    - Some way to keep all documents, diagrams, specs, and requirements in one place with the project; better yet, a tool where all these things, or most of them, could be authored.
    - Task management during the development phase, with milestones and estimates/actuals.
    - Git integration.

    I have been doing contract work and doing really well for myself as far as getting projects, but it is becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software development methodologies, and the more I read the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on the less "solid" everything is. I am afraid that if I don't get some good, solid tools and practices in place I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes, so I would love to hear from some of you lone programmers how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3, developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!

    Read the article

  • Pair Programming, for or against? [on hold]

    - by user1037729
    I believe it has many advantages over individual programming.

    Pros:

    - By pairing senior with relatively junior staff, the junior can get up to speed with both the project and general computing experience, and the senior has to re-think the problem in order to communicate with the junior, thus re-checking his own thinking (rubber duck principle!).
    - At least 2 people will know about any single piece of work; if one person is away the other can cover, and if someone leaves the project, knowledge transfer is easier.
    - Two brains on a complex task are more effective: communication keeps the work free-flowing and provides redundancy in decision making.
    - Code is effectively reviewed as it's being written, so there is no need for a separate review phase, which requires a context switch: someone who has not been working on the piece in question would have to understand and review the related code. Reviewing code on your own which you haven't written or architected is not fun, hence counterproductive.

    Cons:

    - Less bandwidth for performing tasks: let's say we have 4 devs; pair programming requires 2 devs per task, so we would be doing 2 tasks concurrently as opposed to 4. I believe this "con" does not stand up, as the pair-programmed task would complete sooner and comes with a review built in for free! I.e. the pair-programmed task would be more efficient and thus free up resources earlier.
    - Less flexibility to chop and change tasks, as two developers are tied into a task; when flexibility is required this could be a problem.

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    Okie, I looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo:

    - I am drawing a game in 2D (isometric).
    - My objects have their own arrays (i.e. Tiles[], Objects[], Particles[], etc.).
    - I want to have a draw[] array to hold anything that will be drawn.
    - Because it is 2D, I assume I must prioritise depth over any other sorting, or things will look weird.
    - My game is turn-based, so Tiles and Objects won't be changing position every frame. However, Particles probably will.

    So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove object, tile and particle references when I pan the screen or when a tile or object is specifically moved. No idea how often I'm going to have to update for particles right now. I want to do this because my game may have many thousands of objects and I want to iterate through as few as possible when drawing. I plan to give each element a depth value to sort by.

    So, my questions:

    1. Does the above method sound like a good way to deal with the actual drawing?
    2. What is the most efficient way to sort a vector? Most of the time it won't require efficiency, but for panning the screen it will, and I imagine if I have many particles on screen moving across multiple tiles, it may happen quite often.

    For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
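
    One cheap way to maintain the draw list, sketched in Java for illustration (Drawable and its depth value are hypothetical names): keep the visible list sorted at all times and binary-search the insertion point for each element that scrolls into view, instead of re-sorting the whole vector on every change.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        interface Drawable {
            int depth(); // painter's order: lower depth is drawn first
        }

        class DrawList {
            private static final Comparator<Drawable> BY_DEPTH =
                    Comparator.comparingInt(Drawable::depth);
            private final List<Drawable> visible = new ArrayList<>();

            // O(log n) search + O(n) shift: fine for ~200 insertions per second.
            void add(Drawable d) {
                int i = Collections.binarySearch(visible, d, BY_DEPTH);
                if (i < 0) i = -i - 1; // binarySearch encodes the insertion point
                visible.add(i, d);
            }

            void remove(Drawable d) {
                visible.remove(d); // could also binary-search if this gets hot
            }

            // If many particles changed depth this frame, one sort of a
            // mostly-sorted list is cheap: Java's TimSort is near O(n) on such input.
            void resortAfterMovement() {
                visible.sort(BY_DEPTH);
            }
        }

    For frames where many particles moved, one full sort is still fine: TimSort-style algorithms are close to linear on mostly-sorted input, and 2,800 elements is a small list.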

    Read the article

  • New Certification Exam: "Oracle Database 12c: SQL Fundamentals" Released (1Z0-061)

    - by Brandye Barrington
    Oracle Certification begins testing this week for the new Oracle Database 12c Administrator Certified Associate (OCA) certification. Testing for the Oracle Database 12c: SQL Fundamentals (1Z0-061) exam is now underway. Visit pearsonvue.com/oracle and register for exam 1Z0-061. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing, on the Oracle Certification Website. Earning the Oracle Database 12c Administrator Certified Associate (OCA) credential demonstrates that you have the foundational knowledge and skills needed to administer the Oracle Database, and sets the stage for your future progression to Oracle Database 12c Administrator Certified Professional (OCP). With Oracle Database 12c, you will experience the benefits of an Oracle Database that is re-engineered for Cloud computing. Multitenant architecture brings enterprises unprecedented hardware and software efficiencies, performance and manageability benefits, and fast and efficient Cloud provisioning. Oracle Database 12c certifications emphasize the full set of skills that DBAs need in today's competitive marketplace. Be among the first to obtain this groundbreaking new Oracle Certified Associate (OCA) certification by registering for this exam today.

    QUICK LINKS

    Certification Path: Oracle Database 12c Administrator Certified Associate (OCA)
    Certification Exam: Oracle Database 12c: SQL Fundamentals (1Z0-061)
    Registration: pearsonvue.com/oracle

    Read the article

  • would it be bad to put <span> tags within the <head>, for grouping meta data in schema.org format?

    - by hdavis84
    Alright, I'm currently practicing schema.org microdata and trying to find the best route for every site I build. I have found that I can piggyback itemprops on Open Graph meta tags, and I would like to piggyback more of them. However, schema.org requires you to change itemtypes to define all aspects of a "thing". Say I'm defining a LocalBusiness. Open Graph has street address, locality, and region I'd like to piggyback on. I'd have to do something like:

        <html lang="en" itemscope itemtype="http://schema.org/LocalBusiness">
        <head>
        ...
        <meta itemprop="name" content="Business Name" />
        <meta property="og:url" itemprop="url" content="http://example.com" />
        <meta property="og:image" itemprop="image" content="http://example.com/logo.png" />
        <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <meta property="og:street-address" itemprop="streetAddress" content="1234 Amazing Rd." />
            <meta property="og:locality" itemprop="addressLocality" content="Greenfield" />
            <meta property="og:region" itemprop="addressRegion" content="IN" />
        </span>
        </head>

    Although there's more that could be added in, this is enough of an example to show what I'm trying to achieve. I've searched the web to see whether it is a problem to use spans in the head, because I don't want invalid markup. I know I can mark up the address information in the body of the pages, but the route above would be more efficient. Does anyone have an answer for this?

    Read the article

  • Is it worth replacing mouse by standalone trackpad for heavy code-editing? [on hold]

    - by heltonbiker
    I recently got more interested in improving my tools, workspace and workflow. The first sting came with a sore finger due to a crappy keyboard; after some research I fell in love with the "mechanical keyboard is what you need" doctrine, bought one (Cherry MX Brown, if you're curious), and am very happy with the results. Currently I am replacing my previous text editor (Geany) with Sublime Text 3, and am also very happy, feeling much more powerful and professional :) Well, while I re-read all the ancient debates about VIM vs. whatever-else, the following excerpt from a blog post got me thinking again about mouse vs. keyboard, and the "moving around from the very home row" (in VIM) versus gesturing away with the tiny and unstable mouse cursor:

    "Reaching for a mouse may indeed slow you down, but developers are commonly on machines where the trackpad is a micro-hand movement away. Most novice programmers can click on a character on screen faster than an expert Vimmer can type 20jFp; or LkEEE or /word or any other nasty way Vimmers have to use. The point of a mouse is to make arbitrary on screen jumps efficient, and it's very good at doing that. Don't you ever think you can beat a mouse."

    Well, although there is some bitterness in this statement, it makes a lot of sense, and EVEN MORE if you consider your direct input to be a TRACKPAD conveniently placed in front of your spacebar (which oddly is where I like to put my mouse, rotated 90° ccw, due to a serious tendonitis in my right shoulder, already healed, but you know...). So, the question is: has anyone replaced the mouse with a standalone trackpad for code editing on a desktop machine (that is, with a standalone keyboard)? Was it worth the change?

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training: Trusted Data for Your Enterprise Applications.

    Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in the underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and comply more efficiently with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared, which allows our customers and partners to be highly efficient in how they deploy custom business solutions, and also allows our partners to develop specialized components, domain knowledge and even complete business solutions.

    Training is suitable for:
    · Database administrators
    · Architects
    · Technical staff

    Objectives of the training: after completing this course, participants should:
    · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
    · Be able to describe some of the key capabilities and benefits delivered by EDQ
    · Be able to create and run standalone EDQ processes and jobs
    · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers

    Agenda

    17th June: Fundamentals for Demoing (Profile, Audit, Transform and More)
    · Profiling
    · Auditing
    · Transforming
    · Writing and exporting data
    · Jobs and scheduling
    · Publishing, packaging and copying EDQ processes
    · Introduction to the Customer Data Extension Pack
    · Realtime Processing via Web Services
    · The Server Console
    · Run Profiles
    · Data Interfaces
    · Sampling
    · Publishing metrics to the Dashboard
    · Users and security

    18th June: Matching
    · Matching overview
    · Basic matching configuration
    · Matching rule hierarchies
    · Clustering
    · Merging
    · Reviewing possible matches
    · Outputting Match Data
    · Case study

    19th June: Address Verification and Parsing
    · Address Verification Overview
    · Configuration
    · Accuracy Flags
    · Parsing Overview
    · Phrase profiling
    · Tailoring a CDEP Parser
    · Base Tokenization
    · Classification
    · Reclassification
    · Selection
    · Resolution

    Register Here. Don't miss this FREE event; space is limited.

    Oracle University, V Parku 2294/4, 148 00 Praha 4
    17.6. – 19.6. 2014, 09:00 – 17:30

    Read the article

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance. And if possible, I would like to do this without increasing development time much or compromising readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these specifications, IMHO, point to C++ templates, but there's no compiler for them for my target. C macro metaprogramming is another option, but, again in my opinion, that greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++-to-C translator, but I'd like to hear anything else that satisfies the above requirements. Maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C; I just wish templates were available in it.

    Read the article

  • Is There a Cloud Over OpenWorld?

    - by Tony Berk
    If you have been to OpenWorld in the past, you know it can be overwhelming, or at least a bit "large." If this is your first time at OpenWorld, get ready! You are in for a big (or should I say HUGE) treat. The first thing you'll notice when you get to San Francisco is that there are a lot of people, buses with "Oracle" posters, large exhibit halls filled with demos, games and tchotchkes from vendors with hot new solutions, and then there are the sessions. Yes, in fact, there are over 2,000 sessions. How can you possibly sort through 2,000 sessions to find the best 20 or so for you? Which are the 1% for you? We will try to help with some insight over the next few weeks. I'm going to start at the highest level: up in the clouds! Since I know many people are looking for an update on the Oracle Cloud, we will drill down into the cloud and other topics for CRM and Customer Experience sessions in the next set of posts. Below is a list of some of the Oracle executive keynotes during OpenWorld highlighting the Oracle Cloud and application-related topics (the full list is here). In these sessions you will get details on Oracle's strategy and how Oracle is changing the industry to help our customers be more efficient, effective and innovative.

    Sunday, September 30, 6:00pm - 7:00pm | Larry Ellison: Hardware and Software, Engineered to Work Together: Why It's a Different Approach
    Tuesday, October 2, 8:45am - 9:45am | Thomas Kurian: The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy
    Tuesday, October 2, 3:30pm - 4:30pm | Larry Ellison: The Oracle Cloud: Where Social Is Built In
    Thursday, October 4, 9:45am - 10:45am | Mark Hurd: See More, Act Faster: Oracle Business Analytics

    We encourage you to also join the keynotes on the Oracle Database and Cloud Infrastructure, and the fascinating partner keynotes as well. Check the full list on the OpenWorld site. Oh, and if you haven't registered yet, what are you waiting for? OpenWorld Registration Details.

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X, and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size.

    To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly.

    I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue to load the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas?

    EDIT: I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50,000 files in about 40,000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene into the main program and use the hits returned by Lucene to load the relevant records into main memory.
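
    For the integration step, a minimal Lucene sketch in Java (class names as in recent Lucene releases; the API has shifted across versions, so adjust to yours). It indexes each record's field X together with its primary key, so a search returns the keys of the first N matching records, which can then be loaded from the database:

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.StoredField;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.DirectoryReader;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.queryparser.classic.QueryParser;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.ScoreDoc;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import java.nio.file.Paths;

        public class RecordIndex {
            public static void main(String[] args) throws Exception {
                Directory dir = FSDirectory.open(Paths.get("record-index"));

                // Index once (or incrementally as records change).
                try (IndexWriter writer =
                        new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                    writer.addDocument(makeDoc(42L, "the quick brown fox"));
                    writer.addDocument(makeDoc(43L, "a lazy dog"));
                }

                // Search: return the primary keys of the first 100 hits.
                try (DirectoryReader reader = DirectoryReader.open(dir)) {
                    IndexSearcher searcher = new IndexSearcher(reader);
                    QueryParser parser = new QueryParser("x", new StandardAnalyzer());
                    ScoreDoc[] hits = searcher.search(parser.parse("quick"), 100).scoreDocs;
                    for (ScoreDoc hit : hits) {
                        Document doc = searcher.doc(hit.doc);
                        long pk = doc.getField("pk").numericValue().longValue();
                        System.out.println("matching record: " + pk); // load this row from the DB
                    }
                }
            }

            private static Document makeDoc(long pk, String x) {
                Document doc = new Document();
                doc.add(new StoredField("pk", pk));             // primary key, stored only
                doc.add(new TextField("x", x, Field.Store.NO)); // analyzed search field
                return doc;
            }
        }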

    Read the article

  • Are small amounts of functional programming understandable by non-FP people?

    - by kd35a
    Case: I'm working at a company, writing an application in Python that handles a lot of data in arrays. I'm the only developer of this program at the moment, but it will probably be used/modified/extended in the future (1-3 years) by some other programmer, at this moment unknown to me. I will probably not be there directly to help then, but maybe I can give some support via email if I have time for it. So, as a developer who has learned functional programming (Haskell), I tend to solve, for example, filtering like this:

        filtered = filter(lambda item: included(item.time, dur), measures)

    The rest of the code is OO; it's just some small cases where I want to solve it like this, because it is much simpler and, to my mind, more beautiful. Question: Is it OK today to write code like this? How does a developer who hasn't written/learned FP react to code like this? Is it readable? Modifiable? Should I write documentation like explaining to a child what the line does?

        # Filter out the items from measures for which included(item.time, dur) != True

    I have asked my boss, and he just says "FP is black magic, but if it works and is the most efficient solution, then it's OK to use it." What is your opinion on this? As a non-FP programmer, how do you react to the code? Is the code "googable" so you can understand what it does? I would love feedback on this :)

    Edit: I marked phant0m's post as the answer because he gives good advice on how to write the code in a more readable way while keeping the advantages. But I would also like to recommend superM's post because of his viewpoint as a non-FP programmer.

    Read the article

  • How do I cluster strings based on a relation between two strings?

    - by Tom Wijsman
    If you don't know WEKA, you can try a theoretical answer; I don't need literal code/examples. I have a huge data set of strings which I want to cluster to find the most related ones; these could as well be seen as duplicates. I already have a set of pairs of strings that I know are duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that would return me the most relevant candidate pairs of strings that are not yet known to be duplicates. I believe that I need clustering for this, but which type? Note that I want to base this on word-occurrence comparison, not on interpretation or meaning. Here is an example of two strings we know to be duplicates (in our view of them):

    "The weather is really cold and it is raining."
    "It is raining and the weather is really cold."

    Now, the following strings also exist (most to least relevant, ignoring stop words):

    "Is the weather really that cold today?"
    "Rainy days are awful."
    "I see the sunshine outside."

    The software would return the following two strings as most relevant, which aren't known to be duplicates:

    "The weather is really cold and it is raining."
    "Is the weather really that cold today?"

    Then I would mark the pair as duplicate or not duplicate, and it would present me with another pair. How do I implement this in the most efficient way that I can apply to a large data set?
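
    As a baseline for the word-occurrence comparison (the scoring step, not the clustering itself), here is a sketch of Jaccard similarity over word sets in Java; the already-known duplicate pairs can then be used to calibrate the score threshold above which a candidate pair gets proposed:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class WordOverlap {
            // Jaccard similarity: |intersection| / |union| of the two word sets.
            static double jaccard(String a, String b) {
                Set<String> wa = words(a);
                Set<String> wb = words(b);
                Set<String> union = new HashSet<>(wa);
                union.addAll(wb);
                wa.retainAll(wb); // wa is now the intersection
                return union.isEmpty() ? 0.0 : (double) wa.size() / union.size();
            }

            static Set<String> words(String s) {
                // crude tokenization; a real version would also drop stop words
                return new HashSet<>(Arrays.asList(s.toLowerCase().split("\\W+")));
            }

            public static void main(String[] args) {
                System.out.println(jaccard(
                    "The weather is really cold and it is raining.",
                    "It is raining and the weather is really cold.")); // 1.0: same word set
                System.out.println(jaccard(
                    "The weather is really cold and it is raining.",
                    "I see the sunshine outside."));                   // low score
            }
        }

    Scoring all pairs is quadratic in the number of strings, so for a huge data set you would first block candidates (for example, by shared rare words) and only score within blocks.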

    Read the article

  • Resurrecting a 5,000 line test plan that is a decade old

    - by ale
    I am currently building a test plan for the system I am working on. The plan is 5,000 lines long and about 10 years old. The structure is like this:

    1. test title
       precondition: some W needs to be set up, X needs to be completed
       action: do some Y
       postcondition: message saying Z is displayed
    2. ...

    What is this type of testing called? Is it useful? It isn't automated; the tests would have to be handed to some unlucky person to run through, and then the results would have to be given to development. It doesn't seem efficient. Is it worth modernising this method of testing (removing tests for removed features, updating tests where different postconditions now happen, ...), or would a whole different approach be more appropriate? We plan to start unit tests, but the software requires so much work to actually get 'units' to test; there are no units at present! Thank you.

    Read the article

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines, etc.). There can be any given number of archive tables in my system, each with its own schema, and the schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything across all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return paginated results. However, this approach has a few problems:

    - substring searching can only count exact matches
    - the 50% rule applies to each table separately, so MySQL may fail to return important matches or too naively discard common words
    - it is quite expensive in terms of query count and execution time (not an issue right now, as there's not a lot of data in the tables yet)
    - normalized data is not searched at all (I have separate tables for categories, languages and file attachments)

    My planned solution: create a single table with columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for searches instead. Are there any more efficient solutions or best practices for approaching this? Thanks.
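
    A sketch of the synchronization step in Java/JDBC, under the assumptions above: a central search_index(table_id, row_id, data) table with a unique key on (table_id, row_id). All names here are made up for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.Map;

        public class SearchIndexSync {
            private final Connection conn;

            public SearchIndexSync(Connection conn) { this.conn = conn; }

            // Call after any insert/update on a data table; fields is the full row.
            public void upsert(int tableId, long rowId, Map<String, String> fields)
                    throws SQLException {
                String data = String.join(" ", fields.values()); // concatenation of all fields
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO search_index (table_id, row_id, data) VALUES (?, ?, ?) " +
                        "ON DUPLICATE KEY UPDATE data = VALUES(data)")) { // MySQL upsert
                    ps.setInt(1, tableId);
                    ps.setLong(2, rowId);
                    ps.setString(3, data);
                    ps.executeUpdate();
                }
            }

            // Call after a row is deleted from a data table.
            public void delete(int tableId, long rowId) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE FROM search_index WHERE table_id = ? AND row_id = ?")) {
                    ps.setInt(1, tableId);
                    ps.setLong(2, rowId);
                    ps.executeUpdate();
                }
            }
        }

    Sphinx then indexes only search_index, and each hit's (table_id, row_id) pair identifies the original row to fetch and display.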

    Read the article

  • Fastest approach to 3D animation

    - by HappyFerret
    I'm currently tasked with designing a small HTML5 game. Having done everything by myself so far (3D models, codebase, game design, etc.), I'm now at a point where I'm running out of time: I have less than a day to animate and bind everything together. However, that's exactly my problem. I was under the naive impression that everything would be easier if I went with pre-rendered 3D models, but I didn't consider the most difficult part: animation. After having spent over an hour trying to figure out messiahStudio, I figured it's time to ask for outside help. Is there any easier approach to 3D animation than rigging? What I'm basically looking for is some sort of tool that lets me simply grab and move/deform selected polygons. It doesn't have to be as life-like and accurate as rigging, just efficient enough. Were the circumstances any different, I might just learn how to rig, but that's simply out of scope right now. PS: The models were created in Sculptris but are fairly low-poly.

    Read the article

  • Using Bullet physics engine to find the moment of object contact before penetration

    - by MooMoo
    I would like to use the Bullet physics engine to simulate objects in a 3D world. One of the objects will be moved using the position of a 3D mouse control; I will call it the "mouse object", and any other object in the world "object A". I define the time before the mouse object and object A collide as t-1, and the time when the mouse object penetrates object A as t. Now there is a problem with rendering the scene: when I move the mouse very fast, the mouse object ends up inside object A before object A starts to move. I would like the mouse object to stop right away, attached to object A. Also, if object A moves, the mouse object should follow (stay attached to) object A without stopping at the first collision. This is what I did: I find the position of the mouse object at time t-1 and at time t, which I name pos(t-1) and pos(t). The time of contact lies somewhere between t-1 and t; I name it t_contact, so the contact position (without penetration) between the mouse object and object A is pos(t_contact). I then create multiple candidate positions for the mouse object using the interpolation

    pos[n] = pos(t-1) + C * (pos(t) - pos(t-1)), where 0 <= C <= 1

    If I choose C = 0.1, 0.2, 0.3, ..., 1.0, I get 10 candidate positions. I then test collision for all of these 10 mouse objects and choose the one that separates "no collision" from "collision". I feel this method is very inefficient. I am not sure how other people find the time of contact or the position of contact when object A can also move.
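
    Two notes, outside the original question: Bullet's C++ API ships a convex sweep test (btCollisionWorld::convexSweepTest) intended for exactly this time-of-impact query, so it is worth checking whether your binding exposes it. Failing that, the uniform sampling above can be replaced by bisection over C, which resolves the contact fraction to precision 2^-k with only k collision tests. A sketch in Java, with the actual collision check abstracted behind a predicate:

        import java.util.function.DoublePredicate;

        public class TimeOfImpact {
            /**
             * Binary search for the largest C in [0, 1] such that the mouse object
             * placed at pos(t-1) + C * (pos(t) - pos(t-1)) does not yet collide.
             * collidesAt(c) must return true if the interpolated position
             * overlaps object A.
             */
            static double lastFreeFraction(DoublePredicate collidesAt, int iterations) {
                double lo = 0.0; // assumed collision-free (position at t-1)
                double hi = 1.0; // assumed colliding (position at t)
                for (int i = 0; i < iterations; i++) {
                    double mid = 0.5 * (lo + hi);
                    if (collidesAt.test(mid)) hi = mid; else lo = mid;
                }
                return lo; // the interval halves each iteration
            }

            public static void main(String[] args) {
                // Toy example: pretend the object collides for any fraction past 0.37.
                double c = lastFreeFraction(f -> f > 0.37, 20);
                System.out.printf("contact fraction ~ %.6f%n", c); // ~0.37
            }
        }

    With 20 iterations this resolves C to roughly one part in a million, versus 10 fixed samples resolving it to one part in ten.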

    Read the article

  • Sprite batching in OpenGL

    - by Roy T.
    I've got a Java-based game with an OpenGL rendering front end that is drawing a large number of sprites every frame (during testing it peaked at 700). Now, this game is completely unoptimized. There is no spatial partitioning (so a sprite is drawn even if it isn't on screen) and every sprite is drawn separately, like this:

        graphics.glPushMatrix();
        {
            graphics.glTranslated(x, y, 0.0);
            graphics.glRotated(degrees, 0, 0, 1);

            graphics.glBegin(GL2.GL_QUADS);
            graphics.glTexCoord2f(1.0f, 0.0f);
            graphics.glVertex2d(half_size, half_size); // upper right
            // same for upper left, lower left, lower right
            graphics.glEnd();
        }
        graphics.glPopMatrix();

    Currently the game is running at +-25 FPS and is CPU bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing sprites that aren't on screen will help a lot; however, since players can zoom out, it won't help enough, hence the need for batching. However, sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes to do this are built in, but in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture, and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?
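
    For the batching itself, one sketch of the classic fixed-function approach (written against JOGL 2-style GL2 bindings; names may differ in other bindings, and the buffer handling is illustrative): rotate and translate the four corners on the CPU, append them to one interleaved FloatBuffer per texture, and submit each buffer with a single glDrawArrays call instead of one glBegin/glEnd pair per sprite.

        import com.jogamp.common.nio.Buffers;
        import com.jogamp.opengl.GL;
        import com.jogamp.opengl.GL2;
        import java.nio.FloatBuffer;

        class SpriteBatch {
            private static final int MAX_SPRITES = 4096; // caller flushes before exceeding
            // 4 vertices per quad, (x, y) + (u, v) per vertex, interleaved.
            private final FloatBuffer data = Buffers.newDirectFloatBuffer(MAX_SPRITES * 4 * 4);
            private int sprites = 0;

            void add(double x, double y, double degrees, double halfSize) {
                double r = Math.toRadians(degrees), cos = Math.cos(r), sin = Math.sin(r);
                // corner offsets before rotation: (+-halfSize, +-halfSize)
                double[][] corners = {{-halfSize, -halfSize}, {halfSize, -halfSize},
                                      {halfSize, halfSize}, {-halfSize, halfSize}};
                float[][] uvs = {{0f, 1f}, {1f, 1f}, {1f, 0f}, {0f, 0f}};
                for (int i = 0; i < 4; i++) {
                    double cx = corners[i][0], cy = corners[i][1];
                    data.put((float) (x + cx * cos - cy * sin)); // rotated on the CPU
                    data.put((float) (y + cx * sin + cy * cos));
                    data.put(uvs[i][0]).put(uvs[i][1]);
                }
                sprites++;
            }

            // One draw call for every sprite added since the last flush.
            void flush(GL2 gl) {
                data.flip();
                gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
                int stride = 4 * Buffers.SIZEOF_FLOAT;
                gl.glVertexPointer(2, GL.GL_FLOAT, stride, data.position(0));
                gl.glTexCoordPointer(2, GL.GL_FLOAT, stride, data.position(2));
                gl.glDrawArrays(GL2.GL_QUADS, 0, sprites * 4);
                gl.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
                gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
                data.clear();
                sprites = 0;
            }
        }

    Since many sprites share a texture and all are square, keeping one batch per texture means each texture is bound once per frame, and the per-sprite CPU cost collapses to a few float stores.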

    Read the article

  • EmblaCom Oy Maximizes Database Availability and Reduces Costs with MySQL Cluster

    - by Bertrand Matthelié
    Headquartered in Finland, EmblaCom Oy provides turnkey and cloud-hosted voice solutions to mobile operators around the globe. Since launching the original mobile private branch exchange (PBX) in 1998, the company has focused on helping its partners provide efficient voice communications to their key business customers. The company’s voice solutions are used by millions of subscribers, worldwide. EmblaCom Oy needed to replace several database engines with a standardized, scalable, development-friendly database solution to maximize availability and cut costs. The company chose MySQL Cluster Carrier Grade Edition, which has maximized accessibility to EmblaCom’s services for its clients and their hundreds of thousands of subscribers. The initiative has also reduced, by half, the cost of the database solution installation for customers, as well as lowered maintenance and customer service costs. Read the entire case study here.

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource intensive and traditional infrastructure management architectures have developed over time on a project by project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application, middleware, operating system, compute, network and storage. Only when this end to end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure datacenter. Managing Exalogic is substantially less complex and error prone than managing traditional systems built from individually sourced, multi-vendor components because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". Full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • How to model an address type in DDD?

    - by Songo
    I have a User entity that has a Set of Address, where Address is a value object:

        class User {
            ...
            private Set<Address> addresses;
            ...
            public void setAddresses(Set<Address> addresses) {
                // set all addresses as a batch
            }
            ...
        }

    A user can have a home address and a work address, so I should have something that acts as a lookup in the database:

        tbl_address_type
        ------------------------------------
        | address_type_id | address_type   |
        ------------------------------------
        | 1               | work           |
        | 2               | home           |
        ------------------------------------

    and correspondingly:

        tbl_address
        ------------------------------------------------------------------
        | address_id | address_description | address_type_id | user_id   |
        ------------------------------------------------------------------
        | 1          | 123 main street     | 1               | 100       |
        | 2          | 456 another street  | 1               | 100       |
        | 3          | 789 long street     | 2               | 200       |
        | 4          | 023 short street    | 2               | 200       |
        ------------------------------------------------------------------

    1. Should the address type be modeled as an entity or a value type, and why?
    2. Is it OK for the Address value object to hold a reference to the AddressType entity (in case it is modeled as an entity)? Is this feasible using Hibernate/NHibernate?
    3. If a user can change his home address, should I expose a User.updateHomeAddress(Address homeAddress) function on the User entity itself? How can I enforce that the client passes a home address and not a work address in this case? (A sample implementation is most welcome.)
    4. If I want to get the user's home address via a User.getHomeAddress() function, must I load the whole address collection, then loop over it and check each element's type until I find the correct one? Is there a more efficient way?
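
    Not an authoritative DDD answer, but a sketch of one common shape for questions 3 and 4, under the assumption that the set of address types is fixed: model the type as a plain enum inside the Address value object (it is descriptive data with no identity or lifecycle of its own), and let the User entity enforce the home/work invariant:

        import java.util.HashSet;
        import java.util.Optional;
        import java.util.Set;

        enum AddressType { HOME, WORK } // value-like: no identity, no lifecycle

        final class Address { // immutable value object (value-based equals/hashCode omitted)
            private final String description;
            private final AddressType type;

            Address(String description, AddressType type) {
                this.description = description;
                this.type = type;
            }

            AddressType type() { return type; }
            String description() { return description; }
        }

        class User {
            private final Set<Address> addresses = new HashSet<>();

            // The entity enforces the invariant instead of trusting the caller.
            public void updateHomeAddress(Address address) {
                if (address.type() != AddressType.HOME) {
                    throw new IllegalArgumentException("not a home address");
                }
                addresses.removeIf(a -> a.type() == AddressType.HOME);
                addresses.add(address);
            }

            // A linear scan is fine for a handful of addresses; a
            // Map<AddressType, Address> field would make it O(1) if the set grows.
            public Optional<Address> getHomeAddress() {
                return addresses.stream()
                                .filter(a -> a.type() == AddressType.HOME)
                                .findFirst();
            }
        }

    With Hibernate, an enum on an embedded value typically maps via @Enumerated or an attribute converter, so a separate tbl_address_type entity is only needed if moderators must be able to edit the list of types at runtime.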

    Read the article

  • Checking validation of entries in a Sudoku game written in Java

    - by Mico0
    I'm building a simple Sudoku game in Java which is based on a matrix (an array[9][9]), and I need to validate my board state according to these rules:

    1. all rows have the digits 1-9
    2. all columns have the digits 1-9
    3. each 3x3 grid has the digits 1-9

    This function should be as efficient as possible; for example, if the first check fails, I believe there's no need to check the other cases, and so on (correct me if I'm wrong). When I tried doing this I ran into a conflict: should I do one large for loop and check columns and rows inside it (in two other loops), or should I do each test separately and verify each case on its own? (Please don't suggest too-advanced solutions with other class/object helpers.) This is what I thought about. The main validating function (which I want pretty clean):

        public boolean testBoard() {
            boolean isBoardValid = false;
            if (validRows()) {
                if (validColumns()) {
                    if (validCube()) {
                        isBoardValid = true;
                    }
                }
            }
            return isBoardValid;
        }

    And different methods to do each specific test, such as:

        private boolean validRows() {
            int rowsDigitsCount = 0;
            for (int num = 1; num <= 9; num++) {
                boolean foundDigit = false;
                for (int row = 0; (row < board.length) && (!foundDigit); row++) {
                    for (int col = 0; col < board[row].length; col++) {
                        if (board[row][col] == num) {
                            rowsDigitsCount++;
                            foundDigit = true;
                            break;
                        }
                    }
                }
            }
            return rowsDigitsCount == 9 ? true : false;
        }

    I don't know if I should keep doing the tests separately, because it looks like I'm duplicating my code.
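
    For comparison, and not as the only right answer: the three checks can also be fused into one pass over the matrix with "seen" tables, failing fast on the first repeated or out-of-range digit. Note that this variant checks each row individually, whereas validRows() above only verifies that every digit appears somewhere on the whole board:

        public class SudokuValidator {
            // board[r][c] holds digits 1..9 (this sketch requires a fully filled board).
            static boolean isValid(int[][] board) {
                boolean[][] rowSeen = new boolean[9][10];
                boolean[][] colSeen = new boolean[9][10];
                boolean[][] boxSeen = new boolean[9][10];
                for (int r = 0; r < 9; r++) {
                    for (int c = 0; c < 9; c++) {
                        int d = board[r][c];
                        if (d < 1 || d > 9) return false;  // not a filled digit
                        int box = (r / 3) * 3 + c / 3;     // which 3x3 grid we're in
                        if (rowSeen[r][d] || colSeen[c][d] || boxSeen[box][d]) {
                            return false;                  // duplicate: fail fast
                        }
                        rowSeen[r][d] = colSeen[c][d] = boxSeen[box][d] = true;
                    }
                }
                // 81 filled cells with no duplicate in any row, column or box
                // means every unit contains each of 1-9 exactly once.
                return true;
            }
        }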

    Read the article
