Search Results

Search found 7740 results on 310 pages for 'split sound'.


  • Downclock CPU reaching critical temperature

    - by Draga
    I have an HP tx1250 laptop. It has always had serious overheating problems. Although it usually runs fine, I'm now running a continuous test for my dissertation; this brings the CPU temperature close to the critical limit, and from time to time the computer shuts down for reaching it (I checked the log). I used to have the same problem on Win XP, but I noticed Win Vista and 7 downclock the CPU when necessary to cool it down, so I was wondering whether the same is possible on Ubuntu 12. The only program I've found that might do the job is computer temp ( http://computertemp.berlios.de/ ) but it doesn't seem to work under Ubuntu 12. The inside of the laptop is fairly clean, the thermal paste is quite recent, I'm keeping it lifted from the desk, and judging by the sound of the fan, that's running fine as well. The PC is now running between 78 and 91 degrees C, but about once a day it shuts down for reaching 95. I need the results of the test it's running pretty soon, so it's important that it runs non-stop. I've thought about setting the maximum clock of the CPU to slightly less than the maximum, but then these tests I'm running would take much more time. Thanks! PS: first and last HP laptop for me.
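    In the absence of a working GUI tool, the same effect can be scripted against the kernel's cpufreq sysfs interface. Below is a minimal sketch, assuming the standard thermal_zone and scaling_available_frequencies files are exposed on this laptop (paths and thresholds are illustrative and need checking per machine; run as root):

        #!/usr/bin/env python
        # Minimal temperature-driven CPU cap - a sketch, not a polished daemon.
        import glob, time

        TEMP_FILE = "/sys/class/thermal/thermal_zone0/temp"   # millidegrees C
        CPUS = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")

        def read_temp():
            with open(TEMP_FILE) as f:
                return int(f.read()) / 1000.0

        def set_max_freq(khz):
            for cpu in CPUS:
                with open(cpu + "/scaling_max_freq", "w") as f:
                    f.write(str(khz))

        # Supported frequencies, highest first, e.g. [2200000, 1600000, 800000].
        with open(CPUS[0] + "/scaling_available_frequencies") as f:
            freqs = sorted((int(x) for x in f.read().split()), reverse=True)

        step = 0
        while True:
            t = read_temp()
            if t > 88 and step < len(freqs) - 1:
                step += 1                  # too hot: drop one frequency step
            elif t < 80 and step > 0:
                step -= 1                  # cooled down: restore one step
            set_max_freq(freqs[step])
            time.sleep(5)

    This only caps the clock while the temperature is actually near the critical point, so the tests run at full speed whenever the cooling keeps up.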

    Read the article

  • Good practice on Visual Studio Solutions

    - by JonWillis
    Hopefully a relatively simple question. I'm starting work on a new internal project to provide traceability of repaired devices within the buildings. The database is stored remotely on a webserver and will be accessed via a web API (JSON output) protected with OAuth. The front-end GUI is being done in WPF, and the business code in C#. From this, I see the different layers Presentation/Application/Datastore. There will be code for managing all the authenticated calls to the API, classes to represent entities (business objects), classes to construct the entities (business objects), parts for the WPF GUI, parts for the WPF viewmodels, and so on. Is it best to create this in a single project, or split them into individual projects? In my heart I say it should be multiple projects. I have done it both ways previously, and found testing to be easier with a single-project solution, but with multiple projects recursive dependencies can crop up. Especially when classes have interfaces to make them easier to test, I've found things can become awkward.

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the UDK documentation about physical materials and masks. I have my 1-bit BMP mask, and the two physical material assets I want to fire off in the black and white channels. I have applied my material to both a rigid body and a skeletal mesh, and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) then it works fine, but this defeats the point because it gives only one hit reaction. The documentation states that it is possible to extend a class on which we want to use a physical material, based on the KActor class's usage. How do I do that? Here is the quote: "The following properties [ie, ImpactEffect - Particle system to spawn at the point of impact + ImpactSound - Sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially, how do I make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class, perhaps, to get it going? Any help appreciated.

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size. To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly. I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue loading the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas? EDIT: I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50000 files in about 40000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene in the main program and use the hits returned by Lucene to load the relevant records into main memory.
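    For the pure in-memory route, a character n-gram index is a simple way to get fast substring matching over a static data set. A rough sketch of the idea (illustrative only; Lucene's own n-gram tokenizers do the same job more robustly):

        from collections import defaultdict
        from itertools import islice

        def trigrams(s):
            s = s.lower()
            return {s[i:i + 3] for i in range(len(s) - 2)}

        class SubstringIndex:
            """Maps each trigram to the ids of records whose field X contains it."""
            def __init__(self, records):          # records: {id: field_x_string}
                self.records = records
                self.index = defaultdict(set)
                for rid, text in records.items():
                    for g in trigrams(text):
                        self.index[g].add(rid)

            def search(self, query, limit=100):
                q = query.lower()
                if len(q) < 3:                    # too short for the index: scan
                    hits = (r for r, t in self.records.items() if q in t.lower())
                else:
                    # A match must contain every trigram of the query...
                    cand = set.intersection(*(self.index[g] for g in trigrams(q)))
                    # ...but that is only a necessary condition, so verify.
                    hits = (r for r in cand if q in self.records[r].lower())
                return list(islice(hits, limit))

    Since only the first N hits are needed, the generator stops as soon as the limit is reached, so the verification step never touches most candidates.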

    Read the article

  • Can you use programming for a greater good? [closed]

    - by jdoig
    What are the paths one could take to use their programming skills to benefit mankind (good causes, scientific or medical advancement, etc.)? Problem: I dropped out of school and learnt programming on my own, from textbooks and the internet. I have 7+ years of commercial experience, from web applications to big data to mobile apps. But all I seem to do is make rich people richer, with the vain hope that one day I'll be the guy with the good idea, using other people to make myself richer. I googled for similar posts on the subject and saw a lot of people saying... "Just do your 9-5 job and donate a lot to charity"... I'm sorry to sound selfish but that's not what makes me tick; I need to be invested in and excited about the project at hand; it's not only got to be for a greater good but it's got to kick arse and feel good doing it too... Does that kind of job exist? Does it involve programming? What other skills do I need? (Apologies if this question is too 'fluffy' or 'wishy-washy', but if it is, a pointer to where else I could ask it would be appreciated)

    Read the article

  • An online version of ClearTrace

    - by Bill Graziano
    When I visit clients for the first time and conduct a performance review I introduce them to ClearTrace. It’s still the best way I know to identify exactly which queries are consuming the most resources.  The downside is that it has to be downloaded and needs a database created to store the results.  I finally decided it would be easier if I could just upload a trace immediately. You can find the online version of ClearTrace at TraceTune.com.  It provides a simple way to upload a trace file and see exactly which stored procedures or SQL statements consume the most CPU and disk.  This is still a work in progress as I try to determine exactly which features from ClearTrace are important.  I’ve also limited the file upload to 10MB in this beta release.  That might not sound like much, but I get over 20,000 events using this stored procedure to generate the trace. If you’re looking for something to do on a Friday, I’d suggest a little performance tuning.  Generating 10MB of trace data doesn’t take long at all, and in a short time you’ll see exactly which SQL statements you need to tune first.

    Read the article

  • RequireJS: JavaScript for the Enterprise

    - by Geertjan
    I made a small introduction to RequireJS via some of the many cool new RequireJS features in NetBeans IDE. I believe RequireJS, and the modularity and encapsulation and loading solutions that it brings, provides the tools needed for creating large JavaScript applications, i.e., enterprise JavaScript applications. (Sorry for the wobbly sound in the above.) An interesting comment by my colleague John Brock on the above: One other advantage that RequireJS brings is called lazy loading of resources. In your first example, every one of those .js files is loaded when the first file is loaded in the browser. By using the require() call in your modules, your application will only load the JavaScript modules when they are actually needed. It makes for faster startup in large applications. You could show this by showing the libraries that are loaded in the Network Monitor window. So I did as suggested: Click the screenshot to enlarge it and notice how the Network Monitor is helpful in the context of RequireJS troubleshooting.

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little, manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architect the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple. After all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class, or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
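    One preventive measure is to demote the GameManager to a pure composition root: it wires up a list of small, single-purpose systems and runs the loop, and any new feature has to land in a system (or a new one) rather than in the manager itself. A rough sketch of the shape, with illustrative names:

        class PhysicsSystem:
            def update(self, dt): ...          # movement, collision

        class AudioSystem:
            def update(self, dt): ...          # sound playback

        class StateStack:
            """Menu/play/pause states pushed and popped explicitly."""
            def update(self, dt): ...

        class Game:
            """Composition root only: owns the loop and the system list.
            Deliberately has no other methods to dump features into."""
            def __init__(self):
                self.systems = [PhysicsSystem(), AudioSystem(), StateStack()]

            def update(self, dt):
                for system in self.systems:
                    system.update(dt)

    The point is social as much as technical: under crunch, the path of least resistance should be adding a small class, not growing the big one.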

    Read the article

  • Bayesian content filter for vbulletin [on hold]

    - by mc0e
    I've been tasked with coming up with a tool to automatically flag some posts for moderator attention on a large vbulletin forum. It's not spam per se, but the task has a lot in common with the sort of handling that might be done by a spam protection plugin (a mod, in vbulletin speak). There's only so much I can say, but the task does not involve bad users so much as particular kinds of posts which the moderators need to be aware of. Filtering out user registrations and links is therefore not useful, and we are talking about posts by real human users. What I'm looking for is an existing Bayesian classification plugin, or something that I can study to get an understanding of how to do the vbulletin side of the interface in order to build such a thing. I.e., I'd need ways for moderators to list flagged posts, and to correct the classification of posts which have been mis-classified. Ideally I want a 3-way split with an "unsure" category in order to reduce what has to be reviewed to find any mis-classifications. Any pointers? I've searched around a bit, and so far what I've found has been more or less entirely targeted at intervening in sign-ups (mostly using stopforumspam), captchas, and the use of external services like akismet, which are spam-specific. I'm also considering an external solution, which might be able to be interfaced with the forum.
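    If no ready-made plugin turns up, the classifier itself is the easy part; a multinomial naive Bayes with a log-odds threshold gives the 3-way split almost for free. A minimal sketch (the thresholds are illustrative and would need tuning against moderator corrections):

        import math, re
        from collections import Counter

        class FlagClassifier:
            """Tiny naive Bayes trained from moderator decisions."""
            def __init__(self):
                self.words = {"flag": Counter(), "ok": Counter()}
                self.totals = {"flag": 0, "ok": 0}

            def train(self, text, label):          # label: "flag" or "ok"
                for w in re.findall(r"[a-z']+", text.lower()):
                    self.words[label][w] += 1
                    self.totals[label] += 1

            def score(self, text):
                """Summed log-odds that the post deserves flagging."""
                vocab = len(set(self.words["flag"]) | set(self.words["ok"])) or 1
                logodds = 0.0
                for w in re.findall(r"[a-z']+", text.lower()):
                    p_flag = (self.words["flag"][w] + 1.0) / (self.totals["flag"] + vocab)
                    p_ok = (self.words["ok"][w] + 1.0) / (self.totals["ok"] + vocab)
                    logodds += math.log(p_flag / p_ok)
                return logodds

            def classify(self, text, lo=-2.0, hi=2.0):
                s = self.score(text)
                return "flag" if s > hi else ("ok" if s < lo else "unsure")

    Each moderator correction feeds back through train(), and widening the [lo, hi] band trades review workload against missed posts.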

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using python nltk to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness. My problem is I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be to normalize a float score between 0.0 and 1.0 generated from how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency, greater score; etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar problem of search relevance grading between appreciable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
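    For what it's worth, one concrete starting point is to score each metric in [0, 1] separately and combine them with tunable weights, so the weighting question becomes an empirical one. A sketch under those assumptions (the weights and half-life here are arbitrary placeholders):

        import math, time

        def score_hit(tokens, term, posted_at, now=None, half_life_days=7.0):
            """Relevance in [0, 1]: early and frequent mentions score higher,
            and the whole score decays with the age of the post."""
            now = now or time.time()
            positions = [i for i, t in enumerate(tokens) if t == term]
            if not positions:
                return 0.0
            # Each mention is worth less the deeper into the post it appears.
            position = sum(1.0 - p / float(len(tokens)) for p in positions) / len(positions)
            # Repetition helps, with diminishing returns.
            frequency = 1.0 - 1.0 / (1.0 + math.log1p(len(positions)))
            # Exponential recency decay with a tunable half-life.
            age_days = (now - posted_at) / 86400.0
            recency = 0.5 ** (age_days / half_life_days)
            return (0.6 * position + 0.4 * frequency) * recency

    Fitting the weights against even a few dozen hand-ranked queries beats guessing them, and keeps the scheme comparable to whatever Sphinx returns on the fallback path.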

    Read the article

  • Cross-platform builds with OGRE3D via CMake. Any tips?

    - by frarees
    I've been trying to compile a simple project for both OSX and Windows platforms, using OGRE3D, but I've run into some problems along the way. I'm using CMake to create my platform-specific project files (VS solution & Xcode project). Some problems I found are: OGRE3D source is distributed in 2 flavors, Windows sources and UNIX/OSX sources. In OSX, compiling dependencies (freetype, FreeImage and especially OIS) is such a pain. I don't know how to handle precompiled dependencies (they exist for both Win & Mac). This may sound like a noob question, but I would appreciate some tips on this. Resources, forum posts, anything. Does there exist any "cross-platform base project for OGRE3D" on the net? It would be really helpful if someone who has already managed to do this could bring some light. Btw, I'm not basing the project on OGRE3D; it's just that it is the biggest library I'm probably using, so I depend a lot on it. Thanks in advance!

    Read the article

  • Noise Canceling Earphones

    - by Mark Treadwell
    I travel a lot. The hours spent droning through the sky can be made more tolerable with an MP3 player and a set of noise-cancelling headphones. Reducing the sound of the airflow and engines is a great relief. For a year or two, I used a pair of folding Sony MDR-NC5 Noise Canceling Headphones, until the ear foam covers self-destructed. I replaced them with old washcloth material and was happy, but the DW thought it looked bad.  I switched to a new set of Sony MDR-NC6 Noise Canceling Headphones.  These worked equally well, although they did not fold as small as the MDR-NC5 headphones. Over four years of use, the MDR-NC6 headphones started cutting out and making popping noises.  This was not surprising considering the beating they took in my backpack during travel.  It looked like I needed another new set. The older MDR-NC5 headphones were still on the shelf with the hated washcloth covers.  A quick search online showed a vibrant business in selling replacement ear foams, often at exorbitant prices.  Nowhere did I see ear foam covers made for the oblong MDR-NC5.  I then realized that foam is stretchable and that the shape should not matter.  After another search and some consideration, I purchased 2-5/16" foam pad ear covers that were able to stretch over the MDR-NC5's strange shape.  Problem solved for less than $5.

    Read the article

  • What's the best way to handle numerous recurring log entries in game loop?

    - by Kaa
    I have a custom logging system, use of which is scattered all over the engine and game. The system is linked to a "LogStore" that has an std::vector<string> logs[NUM_LOG_TYPES] - each vector corresponds with its log type (info, error, debug, etc.). There's one extra std::vector that has "coordinates" to all log entries in the order they were received. Now, all the logging output is also displayed inside my development console in the game. The game console is handled by an HTML-type GUI and therefore requires a new <p> element to be added for each log output. My problem is that the log entries generated in the main loop each frame freeze the engine, because they continually add elements to the in-game console, and if the console or GUI itself generates a warning, that creates an infinite logging loop. I want to solve it by handling the recurring log entries in an elegant way that lets you know that something is critically wrong, but won't freeze the engine - like displaying the count of errors in the last 60 frames instead of displaying the errors themselves. But how do you handle this? Does anyone know any nifty tricks to do this? I understand the question may sound vague, but if someone has come across this type of issue I'm sure they would know exactly what's happening. Example problematic log entries: OpenGL warnings (I actually do check for errors every frame in many places); really any prints anywhere in the main loop (may be debugging, may be warnings).
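    One common trick is to throttle by message identity: the first occurrence of a message goes to the console immediately, and repeats are only reported as a count once per interval, which also breaks the console-warns-about-itself feedback loop. A minimal sketch of the idea:

        import time
        from collections import defaultdict

        class ThrottledLog:
            """Collapses repeated messages: the first occurrence is emitted
            immediately, repeats become a single count per interval."""
            def __init__(self, emit, interval=1.0):    # emit: callable(str)
                self.emit = emit
                self.interval = interval
                self.pending = defaultdict(int)        # message -> occurrences
                self.last_report = time.monotonic()

            def log(self, msg):
                if self.pending[msg] == 0:
                    self.emit(msg)                     # first sighting: show it
                self.pending[msg] += 1

            def flush(self):                           # call once per frame
                now = time.monotonic()
                if now - self.last_report < self.interval:
                    return
                for msg, n in self.pending.items():
                    if n > 1:
                        self.emit("%s (repeated %d more times)" % (msg, n - 1))
                self.pending.clear()
                self.last_report = now

    The in-game console then receives at most one new element per unique message per interval, no matter how noisy the main loop gets.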

    Read the article

  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio, but due to private reasons (as well as the ability to publish anywhere, and my heavy interest in the Isogenic engine), I would like to switch to HTML5. However, I have very high 2D graphics demands for my game. The game itself will have an HDD size of anywhere between 6GB (min) and 12GB (max), which would be a full game deployed offline. The individual images aren't significantly large, so streaming would be entirely possible if only those assets required were streamed as needed. The game has a massive file size because of the sheer amount of content. Some images or spritesheets would be quite massive (e.g. a very large dragon, which if animated in a spritesheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small - about 7MB for a full character with 15 animations in every direction (all animations not required immediately), so only a few hundred KB to download before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once, when it flips through frames quite rapidly. All my sprites have about 25 frames per animation, 5 directions (a spritesheet for each direction & animation), and run at 30fps. Upon changing direction or animation, or when a new character enters, spritesheets would change and be constantly loading/unloading. If I pack all directions into a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but I am afraid from what I have read that HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.

    Read the article

  • Laser range finder, what language to use? Beginner advice

    - by DrOnline
    I hope this is the right place. I am a programming beginner, and I want to make a laser range finder; I need advice about how to proceed. In a few weeks I will get a lot of dirt-cheap 3-5V lasers and some cheap USB webcams. I will point the laser and webcam in parallel, and somehow use trigonometry and programming to determine distance. I have seen online that others have done it this way; I have purposefully not looked at the details too much because I want to develop it on my own and learn, but I know the general outline. I have a general idea of how to proceed. The program loads in a picture from the webcam, and I don't know how images work really, but I imagine there is a format that is basically an array of RGB values... is this right? I will load in the red values and find the most red one. I know the height difference between the laser and the cam. I know the center dot in the image, and I know the redmost dot. I'm sure there's some way to figure out a range from there. TO THE POINT: 1) Is my reasoning sound thus far, especially in terms of image analysis? I don't need complete solutions, just general points. 2) What I need to figure out is what platform to use. I have an Arduino... apparently, I've read, it's too weak to process images. I know some C, I know some Python, and I have Matlab. Which is the best option? I do not need high sampling rates, and I have not decided whether it should be automated or whether I should make a GUI with a button to press for samples. I will keep it simple and expand, I think. I also do not need it to be super accurate; I'm just having fun here. Advice!
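    On the image question: yes, a decoded webcam frame is essentially a 2D array of (R, G, B) triples, and the rest is one atan and one tan. A sketch of the core math in Python (all constants are placeholders for whatever your rig actually measures; the focal length in pixels comes from a simple calibration shot):

        import math

        H = 0.06            # meters between the laser and the camera axis
        FOCAL_PX = 800.0    # camera focal length expressed in pixels
        CENTER_ROW = 240    # pixel row of the optical center (480-line image)

        def reddest_row(rgb_rows):
            """rgb_rows: list of rows, each a list of (r, g, b) tuples.
            Returns the row index of the most laser-like pixel."""
            best, best_row = -1, 0
            for y, row in enumerate(rgb_rows):
                for (r, g, b) in row:
                    redness = r - (g + b) / 2.0      # red minus background
                    if redness > best:
                        best, best_row = redness, y
            return best_row

        def distance_m(dot_row):
            # The pixel offset from the image center gives the angle between
            # the camera axis and the dot; the laser is parallel to the axis.
            offset = dot_row - CENTER_ROW
            theta = math.atan2(offset, FOCAL_PX)
            return H / math.tan(theta) if offset > 0 else float("inf")

    With the laser mounted a fixed baseline H below the camera, closer targets push the dot further from the image center, and D = H / tan(theta) falls out. Any of the listed platforms can do this; plain Python with a webcam capture library is plenty for single-shot sampling.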

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have 3 screens: the main menu, options and game screen. A single game session usually lasts for 1 min, so the user will alternate frequently between the main menu and game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets, like their background textures, but differ in others. The first solution I came up with is making 2 texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with their own content manager, and I'll store them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache that, since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since the Main, Options and Game screens share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app lifetime. For example, this class gets passed to the options screen when it is created. However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires looking to see if it's already in the common factory and, if not, modifying the existing factories, and so on. And this is just considering textures; I didn't add sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?

    Read the article

  • Role based access to resources for a RESTful service

    - by mutex
    I'm still wrapping my head around REST, but I wonder if someone can help with any suggestions or approaches to role-based access control for a RESTful service, particularly from the point of view of securing the data and how the URLs might look. It's probably best to consider an example: Say I have a REST service for Customers, and want to split the users of this REST service into Admin, Editor and Reader roles: Admins can change all attributes of a Customer resource; Editors can change only some; Readers can only view them. Access control rights are assigned to the Customer entities individually. So, for example, a user of the service might have Admin rights to Customers 1, 2 and 3, Editor access to 4 and 5, and Reader access to 7, 8 and 9. Now consider the user calling the service. What is a good way to separate the list of Customers for the current user? GET /Customer - this might get a list of all customers that the current user has Admin/Editor/Reader access to. But then on each Customer the consumer would need an indication of what role they have. Or would it be "better" having something like GET /Customer/Admin - return all customers the current user has Admin access to? Just looking for some high-level pointers or reading on a decent way to secure and filter the resources based on the roles of the current user.
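    One way to keep the URL space flat is to treat the role as data rather than as a path segment: GET /Customer returns only the entities the caller can see, each representation carries the caller's effective role, and a query parameter narrows by role when needed. A rough sketch of the server-side filtering (the grant table shape is illustrative):

        ROLE_RANK = {"reader": 1, "editor": 2, "admin": 3}

        def customers_for(user_id, grants, customers, min_role="reader"):
            """grants: {(user_id, customer_id): role} -> list of representations.
            Each result tells the client what it may do with that customer."""
            results = []
            for customer in customers:
                role = grants.get((user_id, customer["id"]))
                if role and ROLE_RANK[role] >= ROLE_RANK[min_role]:
                    results.append({"customer": customer, "role": role})
            return results

        # GET /Customer              -> customers_for(uid, grants, all_customers)
        # GET /Customer?role=admin   -> customers_for(uid, grants, all_customers, "admin")

    This keeps /Customer as the single collection resource and avoids minting a URL per role, while the embedded role field answers the "what may I do with this one?" question per entity.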

    Read the article

  • CI - How long is continuous?

    - by Andy
    We currently use CCNet as our continuous integration server. Most projects check for changes every 30 seconds (the default) and if needed perform a build (unit tests, StyleCop, FxCop, etc.). We've got quite a few projects now, and the server spends most of its time near 100% CPU utilization. This has alarmed some of the development team, even though the server is responsive and builds still take about the same length of time they always have. It's been suggested that we increase the check interval to about five minutes. To me that seems too long, and we risk people committing code and then going home for the weekend, and now there's a broken build possibly holding up others. In response, the suggestion is that if someone needs to know the results they can force the build. But that seems to defeat the purpose of CI, as I thought it was supposed to be automated. My proposed solution is just to get another build server and split the builds amongst the servers. Am I thinking about this the wrong way, or is there a point where, if integration isn't often enough, you're not really doing CI anymore?

    Read the article

  • Issue with Banshee

    - by Jordan March
    I went to install Banshee using Ubuntu Tweak's App button. I clicked the stable PPA and installed. When it was done, I tried to launch it; it shows up for a split second then closes. So I thought it might be having issues with Rhythmbox's associations or something, so I uninstalled Rhythmbox. But the issue remained. So I tried to uninstall Banshee, and it won't; it fails. I thought, well, maybe I should try installing it from the terminal, adding the PPA first (sudo add-apt-repository ppa:banshee-team/ppa), updating apt-get, then sudo apt-get install banshee. But it fails. So now I can't run it, uninstall it, reinstall it... or anything. I'm new to Linux, so I don't know how to go about rebuilding the files. But if anyone has seen this issue before, or has any idea how to fix this, I would really appreciate the help. Thanks guys

    Read the article

  • Identifying elements from data feeds generated by affiliate sites

    - by SPI
    I am working with data feeds from affiliate sites. The basic idea is to provide an interface where the user can paste a link to an XML datafeed (these are huge, btw - around 60 MB) that would then be streamed, parsed into small chunks, and mined for the required data, which would then be stored in the database. The problem is that different affiliate sites have different schemas for their XML. It is a little hard mapping the elements in an XML to your database attributes when you don't actually know which element contains what. My solution: use XPath to traverse through the first set of parents and their descendants, fetch the elements as well as the data, and ask the user to map this data to the attributes in the database by selecting from a set of radio buttons that represent the attributes from the database. This will be done just once for each new feed; once the system knows what's what, it will automatically upload the data from the XML to the database. Does this sound viable? Is there a better solution? I realize this leaves an uncomfortable opening for human error... Thanks.
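    The streaming part and the "show the first record so the user can map it" part fit together naturally. A sketch using Python's incremental parser (tag names and the feed itself are whatever the affiliate provides; nothing here is schema-specific):

        import xml.etree.ElementTree as ET

        def first_record_fields(path, sample_len=60):
            """Stream the feed and yield (tag, sample_text) pairs from the
            first complete record, without loading the whole file."""
            depth = 0
            for event, elem in ET.iterparse(path, events=("start", "end")):
                if event == "start":
                    depth += 1
                else:
                    depth -= 1
                    if depth == 1:        # closed a direct child of the root
                        for child in elem.iter():
                            if child.text and child.text.strip():
                                yield child.tag, child.text.strip()[:sample_len]
                        return

        # Render each pair next to radio buttons for the database attributes:
        # for tag, sample in first_record_fields("feed.xml"):
        #     offer_mapping_choice(tag, sample)   # hypothetical UI hook

    Once the user confirms the tag-to-column mapping, the same iterparse loop (calling elem.clear() after each record) can upload the rest of the feed in constant memory.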

    Read the article

  • In C++ Good reasons for NOT using symmetrical memory management (i.e. new and delete)

    - by Jim G
    I am trying to learn C++ and programming in general. Currently I am studying open source with the help of UML. Learning is my hobby, and a great one too. My understanding of memory allocation in C++ is that it should be symmetrical: a class is responsible for its resources. If memory is allocated using new, it should be returned using delete in the same class. It is like in a library: you, the class, are responsible for the books you have borrowed, and you return them when you are done. This, in my mind, makes sense. It makes memory management more manageable, so to speak. So far so good. The problem is that this is not how it works in the real world. In Qt, for instance, you create QObjects with new and then hand over ownership of the object to Qt. In other words, you create QObjects and Qt destroys them for you. Thus, unsymmetrical memory management. Obviously the people behind Qt must have a good reason for doing this. It must be beneficial in some way. My questions are: What is the problem with Bjarne Stroustrup's idea of symmetrical memory management contained within a class? What do you gain by splitting new and delete so that you create an object and destroy it in different classes, as you do in Qt? Is it common to split new and delete, and why, in such cases, in other projects not involving Qt? Thanks for any help shedding light on this mystery!

    Read the article

  • Full Text Search Strategy For My Website

    - by Hosea146
    I have a website that allows users to search for items in various categories. Each category is a separate area (page) of my website. For example, some categories might be cars, bikes, books etc. At the moment a user has to search for an item by going to the page (for example, cars) and searching for the car they want. I would like to allow the user to search for anything on my site, from my main home page. At the moment, each page (category) has its own set of tables, and I don't really want to turn Full Text Search on for each table (20+ of them) and search each table individually when a search is done. This is going to be slow and tedious. What I'm thinking of doing is creating a single table that will hold all searchable information for each category of item (when an item is saved in its respective table, I would copy all searchable information over to my 'Search' table). I would then turn Full Text Search on for that table, and search that table. Does this sound reasonable? Is there a better way? I've never used Full Text Search before, so this is new to me.

    Read the article

  • What is the best way to manage large 3d worlds (i.e minecraft style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at their large worlds, but at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because: A) It's written in Java, and as most of the actual spatial partitioning and other memory management activities happen in there, it would be slower than a native C++ version. B) They are not partitioning their world very well. I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now I am assuming that to make this sort of world performant you would need to: a) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level. b) Use some algorithm to work out which blocks within the scene can currently be seen, as blocks closer to the user could occlude the blocks behind them, making it pointless to render them. c) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
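    On top of (a), many voxel engines skip per-cube tree nodes entirely and use fixed-size chunks as the partitioning unit: the world is a sparse map of 16x16x16 chunks, each a flat array of block type ids, so empty space costs nothing and a chunk is the natural unit for meshing, culling and saving. A small sketch of that layout:

        from array import array

        CHUNK = 16
        EMPTY, DIRT = 0, 1                      # block type ids

        class World:
            def __init__(self):
                self.chunks = {}                # (cx, cy, cz) -> flat byte array

            def _locate(self, x, y, z):
                key = (x // CHUNK, y // CHUNK, z // CHUNK)
                i = (x % CHUNK) + (y % CHUNK) * CHUNK + (z % CHUNK) * CHUNK * CHUNK
                return key, i

            def get(self, x, y, z):
                key, i = self._locate(x, y, z)
                chunk = self.chunks.get(key)
                return chunk[i] if chunk else EMPTY

            def set(self, x, y, z, block):
                key, i = self._locate(x, y, z)
                if key not in self.chunks:
                    if block == EMPTY:
                        return                  # never materialize empty chunks
                    self.chunks[key] = array("B", [EMPTY]) * (CHUNK ** 3)
                self.chunks[key][i] = block

    One byte per block keeps a loaded chunk at 4 KB, which satisfies (c), and visibility can then be handled per chunk (frustum tests, plus only meshing faces that border an empty block) rather than per cube.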

    Read the article

  • What is the best approach for deploying apps to companies

    - by Supercell
    What is the best approach for the following scenario: 1) A publicly available app (available in app stores) which is used by end users to make use of services offered by multiple companies. 2) These companies also maintain their services using a mobile app. I'm not sure how to solve the second part. Having one app for both end-user and admin functionality, secured by username/password, doesn't sound like a good idea. This would leave only the option of developing a separate admin application for the companies. What is the best approach to deploy "admin"-like mobile apps to companies only, for Android, iOS and Windows Phone? Some additional information: Public App ---- Servers ----- Multiple Company Apps. The public app shows all companies offering their services. An end user uses the public app to order something from a specific company. The order is sent to our servers. Our servers send the order to the associated company. This order is displayed in the company's admin app, with the option to accept it.

    Read the article

  • How do you coordinate with co-workers to give a balanced interview?

    - by goldierox
    My company has been conducting a lot of interviews lately, for candidates with various experience levels ranging from interns to senior candidates. We put our candidates through five 45-minute interview sessions where we try to ask a range of questions. One person always asks the same questions that test logic and communication. The rest typically split their time between a whiteboard coding question and a discussion of previous projects, technologies the interviewee has worked with, and what he/she is looking for in a job. Generally, we know the range of questions that other people on the loop will ask. Sometimes we switch things up and end up with redundancies. Today, 3 interviewers asked tree-related questions. Other times, we've all honed in on the same project on a resume and have had the interviewee talk about it with everyone. I think a smooth interview process would help us learn more about the candidate while giving the candidate the impression that we have our act together as a team. How do you coordinate with others in the interview loop to give a balanced interview?

    Read the article
