Search Results

Search found 5756 results on 231 pages for 'cpu utilization'.

Page 162/231

  • What are the possible options for AI path-finding etc. when the world is "partitioned"?

    - by Sebastien Diot
    If you anticipate a large persistent game world, and you don't want to end up with some game server crashing due to overload, then you have to design from the ground up a game world that is partitioned in chunks. This is particularly true if you want to run your game servers in the cloud, where each individual VM is relatively weak, and memory and CPU are at a premium. I think the biggest challenge here is that the player receives all the parts around the location of the avatar, but mobs/monsters are normally located in the server itself, and can only directly access the data about the part of the world that the server owns. So how can we make the AI behave realistically in that context? It can send queries to the other servers that own the neighboring parts, but that sounds rather network intensive and latency prone. It would probably be more performant for each mob AI to be spread over the neighboring parts, and to proactively send the relevant info to the part that contains the actual mob at the moment. That would also reduce the stress of a mob crossing a border between two parts, and therefore "switching server". Have you heard of any AI design that solves those issues? Some kind of distributed AI brain? Maybe some kind of "agent" community working together through message passing?
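
    A hedged sketch of the message-passing idea above (every name and number here is hypothetical, not from the question): an AI agent never touches a neighboring partition's memory directly; when its sensing radius spills over a chunk border it emits a query message to the server that owns that chunk and consumes the reply asynchronously.

        // Hypothetical sketch of cross-partition queries for a partitioned world.
        // Names, chunk size and message layout are assumptions for illustration.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct ChunkId { int cx, cz; };            // which world chunk (partition) owns an area

        struct NeighborQuery {                     // sent to the server owning 'target'
            long long mobId;                       // the asking AI agent
            ChunkId   target;
            float     x, z, radius;                // area of interest, in world units
        };

        constexpr float kChunkSize = 256.0f;       // assumed chunk edge length

        // Collect the neighbouring chunks a mob's sensing radius spills into.
        std::vector<NeighborQuery> queriesFor(long long mobId, float x, float z, float radius) {
            std::vector<NeighborQuery> out;
            ChunkId home{static_cast<int>(std::floor(x / kChunkSize)),
                         static_cast<int>(std::floor(z / kChunkSize))};
            for (int dz = -1; dz <= 1; ++dz)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dz == 0) continue;   // our own chunk: direct access, no message
                    ChunkId n{home.cx + dx, home.cz + dz};
                    // Nearest point of chunk n to the mob's position.
                    float nx = std::fmax(n.cx * kChunkSize, std::fmin(x, (n.cx + 1) * kChunkSize));
                    float nz = std::fmax(n.cz * kChunkSize, std::fmin(z, (n.cz + 1) * kChunkSize));
                    if (std::hypot(x - nx, z - nz) <= radius)
                        out.push_back({mobId, n, x, z, radius});
                }
            return out;
        }

        int main() {
            auto q = queriesFor(42, 250.0f, 128.0f, 20.0f);  // mob close to the +x border of chunk (0,0)
            std::printf("queries to send: %zu\n", q.size()); // expect 1 (the chunk at cx=1, cz=0)
        }

    The reply would carry only entity ids or coarse summaries, which keeps the cross-server traffic bounded even when many mobs sit near a border.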

    Read the article

  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.

    Read the article

  • What is the proper way to install the 3.4 kernel?

    - by Marcelo Ruiz
    Kernel 3.2 has an annoying error for my wireless card (rtl8192se-b) that makes the connection drop and/or prevents the card from making a connection to the wireless router. Dealing with it was very frustrating until I found out the bug was corrected in 3.4. I downloaded: linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb and installed them with: sudo dpkg -i * Now the wireless works fine, but I have two problems that I cannot solve. The first one is minor: plymouth will not start at all, but if I boot with the 3.2 kernel it works fine. The second one is serious: sometimes the computer won't shut down or reboot. The X server terminates but the computer shows part of my grub background and will stay there forever using 100% of the CPU. I have a Toshiba Qosmio with a Core i7 and an Nvidia graphics card (using nvidia-current). During one shutdown, I briefly read a message saying that the virtualbox module couldn't be unloaded from the kernel. I tried to solve this by removing and purging virtualbox and installing it back. I don't see the message anymore, but sometimes the computer still won't shut down or reboot. Am I missing something to properly configure the new kernels? Thanks!

    Read the article

  • Xubuntu 14.04 with Compton, strange screen tearing, only when playing videos though (advice needed)

    - by LinuxDudester
    Hello beloved community, yet again I am in need of your great expertise. I ran into a very strange issue and just can't wrap my mind around it. I'm running Xubuntu 14.04 exclusively, with Compton installed. The OS runs great and I have absolutely no screen tearing when I move my windows around, scroll in my web browser, work in Gimp or Photoshop (wine), or even when I play very graphically demanding games, like Metro Last Light, Euro Truck Driver 2 and so on. There's not a tiny bit of tearing to see, but as soon as I play videos in xbmc, vlc or parole media player the tearing begins (strangely enough this does not apply to youtube videos). I followed all available workarounds on askubuntu and the ubuntu forum, like the 50-xserver-command.conf, startx /etc/X11/Xsession /usr/bin/xbmc-standalone -- -bs or the libsdl1.2debian fix and many more, but to no avail. I tried the open-source Nouveau display drivers as well, but for some odd reason they don't work so great on my system, or at least with my graphics card: even with Compton installed and configured, I have an extreme amount of screen tearing. As soon as I switch to the proprietary Nvidia drivers the screen tearing is gone completely, except for video playback with xbmc, vlc or parole media player. System info for your reference: OS: Xubuntu 14.04 Linux-x86_64 - Processor: Intel Core i7-4770S CPU @ 3.10GHz - Ram: 16 GB - GeForce GT 750M 1024 MB - Nvidia Driver: 331.38 Has anyone experienced such an odd issue, or do you have any advice on how I could fix this? I would appreciate any help! Have a nice day!

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
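
    To make the "text analysis task" point concrete, here is a hedged sketch (not the book's reference code) that assembles only the simplest Hack instruction form, the A-instruction @value, into its 16-bit word; symbols, C-instructions and error handling are left out.

        // Minimal sketch of the "assembler as text analysis" idea, restricted to Hack
        // A-instructions (@value -> 0 followed by the 15-bit binary value).
        #include <bitset>
        #include <iostream>
        #include <string>

        std::string assembleAInstruction(const std::string& line) {
            // "@21" -> "0000000000010101"
            unsigned value = std::stoul(line.substr(1));          // text -> number
            return std::bitset<16>(value & 0x7FFF).to_string();   // 15-bit address, MSB 0
        }

        int main() {
            for (std::string line : {"@2", "@3", "@21"})
                std::cout << line << " -> " << assembleAInstruction(line) << '\n';
        }

    The same logic could be written in any host language that can emit a file of bits for the target platform, which is the point the question is making about cross-assembling for a brand new CPU.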

    Read the article

  • How to reduce the fan noise and how to increase battery life?

    - by mehdi
    I have a brand new Sony Vaio S series laptop (VPCSA2DGX). It came factory installed with Windows 7 Professional Edition 64-bit. It runs an Intel Core i5, 500 GB HDD, 4GB RAM. First I installed Ubuntu 11.10 64-bit alongside Windows to dual boot. Later, since the problem was not solved, I installed Ubuntu 12.04 64-bit alongside Windows to dual boot. However the problem keeps annoying me. Problem: when running Ubuntu 11.10/12.04, the battery lasts only about 1.5 hours. The fan runs loud and continuously, and there is a lot of heat generated. System Monitor shows less than 5% CPU used. My laptop has hybrid graphics, and I tried turning off the AMD graphics card and keeping the Intel graphics card on. However I cannot get the fan noise or heat to go away, and consequently the battery drain continues. BTW, in Windows the laptop gives 4-5 hours of battery power, the fan is silent and there is no heat problem. Any ideas on how to reduce the fan noise and how to increase battery life in Ubuntu 11.10/12.04?

    Read the article

  • Diablo 3 "freezes" periodically

    - by Shauna
    I'm running Diablo 3 (Starter Edition, digital download) on the following:
    - Ubuntu 12.04 64-bit (stock Unity, Gnome, etc.; kernel version 3.2.0-29-generic)
    - Wine version 1.5.11 (base, from Wine's PPA, game started with setarch i386 -3)
    - Intel i7 920 CPU
    - nVidia GTX260 with driver version 295.49 ("post-release updates" entry within the proprietary drivers tool), dual monitors
    - 6GB RAM
    Every so often (and what appears to be at random), the game video will freeze up. I can still move the mouse, and it reacts to ctrl+alt+f2 to drop into text mode, but I can't get back to the desktop (which means I can't check the terminal to see what's going on after launching from a terminal, especially since even in windowed mode the secondary screen gets shut off by the game), and I can't continue to play the game. In order to get it running again, I have to restart lightdm, then relaunch the game (or, in a couple of rare cases, I had to restart the computer entirely, because running sudo service lightdm [stop/start] doesn't appear to react). Turning down the video settings seems to have helped in some cases, but not all of them. Times it's frozen on me:
    - The beginning of The Fallen Star quest part 6 - kill the Wretched Mother, right as you walk out of New Tristram and engage the monsters on the northern path (it repeatedly froze here until I adjusted the graphics down)
    - Within the cinematic/event upon finding Deckard Cain
    - While fighting the skeletons to protect Deckard Cain
    - When about to enter Leoric's passage after Cain sends you back to where you found him
    That's as far as I've gotten through the game so far. Additionally, this doesn't happen in other games I play and seems to only occur with Diablo 3. Has anyone else run into this issue and know a possible cause or fix, or at least know where I can look (and what to look for) to figure out why this is happening?

    Read the article

  • Upgrade to 12.04 results in an empty Dash, and no date & time on the top panel either

    - by Nicolas
    I've upgraded from Ubuntu Netbook Remix (something) to 12.04 LTS, and I've got two issues. (I've got an Asus eeePC, 32-bit, Intel 945GME x86/MMX/SSE2 and Intel Atom CPU N270 @ 1.6GHz x2.) Nothing in the Dash: only the "home" tab, other tabs are missing, and no search results whatsoever. Missing elements in the system panel: privacy and date & time. No date & time on the right corner either. I've tried to reset Unity with the terminal, but the process was a whole mess full of errors. It did show date & time in the system panel (not on the top-right corner) while the process was going on in the terminal. But then it was such a mess (no more icons on the right corner amongst other things), and the process wouldn't complete, so I had to reboot the computer and got Unity as before, still with no date & time and privacy.

    Read the article

  • MaaS minimum requirements with juju-jitsu?

    - by Christopher Shen Mu Long
    I've browsed through so many different sites and found so much contradictory information. As I am getting tired of this and believe this question affects many other users, I would like to collect the "once and for all" answer. Unfortunately, the documentation on MaaS and Juju is ... well, not the best, sorry to say that. What are the minimum system requirements for setting up a MaaS cluster, which is going to be orchestrated with juju-jitsu? Do the machines need to have the exact same system specifications, or can I just combine different hardware? What are the minimum requirements for the master machine? E.g. "You need at least 8GB of RAM, a dual core CPU with at least 3.0 GHz." How many machines do I need to deploy MaaS on? I've read six machines, nine machines, and so on. I clearly want to know: "You need one for the master and e.g. five nodes." Do I need to attach as many NICs (network interface cards) to my master machine as there are nodes, or can I simply attach two NICs and a switch? One NIC for connecting to the internet, one for handling the MaaS tasks, connected to a switch, which connects my nodes to the master? Is Juju now ready for local deployment? The last time I experimented with Juju and had to reboot my machine, the services orchestrated by Juju were gone. This was an issue I also found on the official Juju site. Unfortunately, as mentioned above, the documentation is not the best, so I could not find the necessary info on that again. So: can I use Juju in a local environment, or will a reboot break my setup?

    Read the article

  • When installing Ubuntu, I get only a dark black screen

    - by faruque
    I am trying to install Ubuntu 12.04 LTS dual boot with Windows 7, but when I click on Try or even Install Ubuntu, I get only a black screen. I can't see any text or anything else. When I look at my laptop's screen closely, Ubuntu is shown in the middle of the screen, but the screen is dark black. Because of this I am unable to install Ubuntu on my laptop. Please help in this regard. Following are the details of my laptop: Manufacturer: Acer Aspire 4736. Processor: Intel Core 2 Duo CPU T660. Graphics driver: Mobile Intel(R) 4 series express chipset family (Microsoft corporation - WDDM 1.1), current version installed 8.15.10.2302. In Ubuntu 11.04 I know how to boot with nomodeset, but I don't know how to boot with nomodeset in Ubuntu 12.04 LTS, because there is no option shown for the F6 key. My laptop is an Acer Aspire 4736, and my video/graphics card shows as unknown in Ubuntu. Please someone help me. Can changing or upgrading my laptop's graphics card solve this problem? If yes, which graphics card should I go for that is supported by Ubuntu and other Linux distros? Please someone help.

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS and for that I'm trying to understand which of the processes eat the most memory. Reading top, here's what I get:

        Mem:  366544k total, 321396k used, 45148k free, 380k buffers
        Swap: 1048572k total, 592388k used, 456184k free, 7756k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
        12339 ruby   20  0  844m  74m 2440 S    0 20.8 0:24.84 ruby
        12363 ruby   20  0  844m  73m 1576 S    0 20.6 0:00.26 ruby
        21117 ruby   20  0  171m  33m 1792 S    0  9.3 2:03.98 ruby
        11846 ruby   20  0  858m  21m 1820 S    0  6.0 0:09.15 ruby
        21277 ruby   20  0  219m  11m 1648 S    0  3.2 2:00.98 ruby
          792 root   20  0  266m  10m 1024 S    0  3.0 1:40.06 ruby
          532 mysql  20  0  234m 4760 1040 S    0  1.3 0:41.58 mysqld
          793 root   20  0  250m 4616  984 S    0  1.3 1:20.55 ruby
          586 root   20  0  156m 4532  848 S    0  1.2 6:17.10 god
        12315 ruby   20  0  175m 2412 1900 S    0  0.7 0:07.55 ruby
         3844 root   20  0 44036 2132 1028 S    0  0.6 1:08.22 ruby
        10939 ruby   20  0  179m 1884 1724 S    0  0.5 0:08.33 ruby
         4660 ruby   20  0  229m 1592 1440 S    0  0.4 2:55.46 ruby
         3879 nobody 20  0 37428  964  520 S    0  0.3 0:01.99 nginx

    As you can see my memory is about 90% used (which is my issue) but when you add up the %MEM values, it goes to about 50-60% only. Same thing, RES doesn't add up to ~350mb. Why? Am I misunderstanding their meaning? Thanks

    Read the article

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm creating (designing) an entity system and I ran into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An entity is just a pointer to an array of components (where the address acts like an entity ID). An EntityPrototype contains the entity name and an array of component names. Finally there is the Subsystem (System or Processor), which works on the component pools. Actual problem: the problem is that some components depend on others (Model, Sprite, PhysicalBody, Animation depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [S]prite, [P]hysicalBody and [H]ealth:
    Tank: Transform, Sprite, PhysicalBody
    BgTree: Transform, Sprite
    House: Transform, Sprite, Health
    and create 4 Tanks, 5 BgTrees and 2 Houses, so my pools will look like:
    TTTTTTTTTTT // Transform pool
    SSSSSSSSSSS // Sprite pool
    PPPP // PhysicalBody pool
    HH // Health component
    There is no way to process them using indices. I've spent 3 days working on it and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity - but it wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools) - but I guess that would be a nightmare for CPU caches. Thanks
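
    One hedged sketch of a way to process pools of different lengths (an assumed design, not the poster's): give each entity a component bitmask plus one index per component type, and let each subsystem walk only the entities whose mask matches, fetching components through those indices.

        // Assumed design for illustration: dense component pools of unequal length,
        // addressed through per-entity indices instead of a shared pool position.
        #include <array>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        enum ComponentBit : std::uint32_t {
            TransformBit = 1u << 0, SpriteBit = 1u << 1, PhysicalBodyBit = 1u << 2, HealthBit = 1u << 3
        };
        enum ComponentSlot { TransformSlot = 0, SpriteSlot, PhysicalBodySlot, HealthSlot, SlotCount };

        struct TransformData    { float x = 0, y = 0; };
        struct PhysicalBodyData { float vx = 0, vy = 0; };

        struct Entity {
            std::uint32_t mask = 0;                       // which components the entity owns
            std::array<std::uint32_t, SlotCount> index{}; // where each one lives in its pool
        };

        struct World {
            std::vector<Entity>           entities;
            std::vector<TransformData>    transforms;     // pools of different lengths, as above
            std::vector<PhysicalBodyData> bodies;
        };

        // "Physics subsystem": processes only entities owning Transform + PhysicalBody.
        void stepPhysics(World& w, float dt) {
            const std::uint32_t required = TransformBit | PhysicalBodyBit;
            for (const Entity& e : w.entities) {
                if ((e.mask & required) != required) continue;
                TransformData& t          = w.transforms[e.index[TransformSlot]];
                const PhysicalBodyData& b = w.bodies[e.index[PhysicalBodySlot]];
                t.x += b.vx * dt;
                t.y += b.vy * dt;
            }
        }

        int main() {
            World w;
            // One "Tank" (Transform + PhysicalBody) and one "BgTree" (Transform only).
            w.transforms.push_back({0, 0});  w.bodies.push_back({1, 0});
            w.entities.push_back({TransformBit | PhysicalBodyBit, {0, 0, 0, 0}});
            w.transforms.push_back({5, 5});
            w.entities.push_back({TransformBit, {1, 0, 0, 0}});

            stepPhysics(w, 1.0f);
            std::printf("tank x = %.1f, tree x = %.1f\n", w.transforms[0].x, w.transforms[1].x); // 1.0, 5.0
        }

    The pools stay dense and homogeneous as in the description above; what changes is that a subsystem never assumes the Transform pool and the PhysicalBody pool share positions.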

    Read the article

  • Visual Studio 2010 editor painfully slow

    - by Daniel Gehriger
    I'm running out of patience with MS VisualStudio 2010: I'm working on a solution containing ~50 C++ projects. When using the editor, I experience a lag of 1 - 2 seconds whenever I move the cursor to a different line, or when I move to a different window, or generally when the editor loses and gains focus. I went through a whole series of optimizations, to no avail: installed all hotfixes for VS2010; disabled all add-ins and extensions; disabled Intellisense; deleted all temporary files created by VS2010; disabled hardware acceleration; unloaded all but 15 projects; disabled tracking changes; closed all but one window; and so on. This is on a Dual Core machine with an SSD hard drive (verified throughput 100MB/s), enough free space on the HD, Windows 7 Pro 32-bit with 3GB of RAM and most of it still free. Whenever I type a letter, CPU usage of devenv.exe goes to 50 - 90% in process monitor for 1 - 2 seconds before returning to 5%. I used Process Explorer to analyze registry and file system access, and I only notice frequent accesses to the .sln file (which is quite small), and a few registry reads, but nothing that would raise a red flag. I don't have this problem with solutions containing fewer projects, so I'm inclined to think that it's related to the number of projects. For your information, the entire solution has been migrated over the years from VS2005 to VS2008 to now VS2010. Does anyone have any ideas what else I could do to resume work on this project, other than returning to VS2008?

    Read the article

  • django & postgres linux hosting (with SSH access) recommendations

    - by Justin Grant
    We're looking for a good place to host our custom Django app (a fork of OSQA) and its postgresql backend. Requirements include:
    - Linux
    - Python 2.6 or (ideally) Python 2.7
    - Django 1.2
    - Postgres 8.4 or later
    - DB backup/restore handled by the hoster, not us
    - OS & dev-platform-stack patching/maintenance handled by the hoster, not us
    - SSH access (so we can pull source code from GitHub, so we can install python eggs, etc.)
    - ability to set up cron jobs (e.g. to send out daily email updates)
    - ability to send up to 10K emails/day
    - good performance (not ganged up with a zillion other sites on one CPU, not starved for RAM)
    - FTP or SCP access to web logs
    - dedicated public IP
    - SSL support
    - costs under $1000/month for a relatively small site (<5M pageviews/month)
    - good customer service
    We already have a prototype site running on EC2 on top of a Bitnami DjangoStack. The problem is that we have to patch the OS, patch postgres, etc. We'd really prefer a platform-as-a-service (PaaS) offering, like Heroku offers for Rails apps, where all we need to worry about is deploying our code instead of worrying about system software patching and maintenance. Google App Engine is closest to what we're looking for, but they don't offer relational DB access (not yet at least). Anyone have a recommendation?

    Read the article

  • Licensing Enterprise Manager Grid Control

    - by Lajos Sárecz
    I often get questions about licensing Oracle Enterprise Manager Grid Control, so below I try to summarize the most important information. This overview is not exhaustive, since there are several products (Data Masking, Real Application Testing, Real User Experience Insight, Application Testing Suite) that are related to Enterprise Manager but are licensed differently. The primary source of information on Enterprise Manager licensing is the Licensing Information document. The key points:
    - The Grid Control framework itself (the Agents and the console with the base functionality - see below) is free, and it even includes a restricted-use license for Oracle Database, provided the database is used only for the Oracle Management Repository. Note that this does not include other Oracle Database options, such as RAC! Similarly, Oracle WebLogic Server can be used free of charge exclusively to serve the Oracle Management Server, but without clustering.
    - The Grid Control base functionality: Discovery, Groups, Job Scheduling, Real time availability, Performance & monitoring, Target Home Pages, Administration, Console alerts.
    - Depending on the managed products, the base functionality can be extended with Management Pack, Plug-in and Connector products. As a rule, their licensing must always follow the licensing of the monitored, managed product. So, for example, if we want to use the Diagnostics Pack for 2 database servers, we have to buy CPU or NUP (Named User Plus) licenses for both, depending on how the database itself is licensed. Note that these particular Management Packs can only be used with Enterprise Edition databases.
    - Many paid features are reachable on the Grid Control interface without any separate installation (the same is true for Database Control and Fusion Middleware Control). To avoid license violations, it is worth checking which Management Packs have been enabled in a given environment. The easiest way to do this is in the Management Pack Access submenu of the Grid Control Setup menu; a more detailed description can be found here. It is also possible to disable the Database Diagnostics and Tuning Packs at the database level so that they cannot be used even from the command line - I have written about this earlier.
    The USD price of each management product can be found in the price list. If anything important has been left out, I welcome questions and comments, and I will extend the above as needed.

    Read the article

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize. Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards). Parameter location: Parameters > System Parameters > Engine > Proport. Default: 0. Engine mode: Both. Details: specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated; it holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.
    - As the forecast is generated the engine goes up the tree using max_fore_level and not top_level -1. Max_fore_level has to be less than or equal to top_level -1. Due to this requirement, combinations falling under the same top level -1 member must be in the same task. A member of the top level -1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.
    - Reveal current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast design.
    - Control of tasks: the number of tasks created is the lowest of the number of branches (as defined by top level -1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You are used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.
    - Discovery of current branch size: review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
    - Control of forecast tree branch size:
      - Run the following SQL to determine how evenly the branches are being split by the engine:
        select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;
        This will give you an understanding of whether some of the individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.
      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator):
        select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;
        This will give us an understanding of the level of the forecast tree at which the forecast is being generated. Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on the results of this we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.
    For example:
      - Review the 2nd highest level in the forecast tree, below highest/highest, as this is the level which determines the size of the branches.
      - If a few resulting tasks are too large it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
      - For example, say the highest level of the forecast tree is set to Brand/All Locations, and you have 10 brands but 2 of the brands account for 67% and 29% of all combinations. There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
      - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
      - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support.
    Allocation will be affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for TargetTaskSize depending on the number of engines.
    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: Development strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).
    - How to control the number of engines? Determine how many CPUs are on the machine(s) that is (are) running the engine. As mentioned earlier, the general rule is that you should designate 2 engines per available CPU. So, for example, if you are running the engine on a machine that has 4 CPUs then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.
    - Where do I set the number of engines? To add multiple computers where the engine will run, make a backup of the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:
      <Entry>
        <Key argument="ComputerNames" />
        <Value type="string" argument="localhost,localhost,localhost" />
      </Entry>
    Otherwise, if there are no additional engines defined, the calculated value of TargetTaskSize is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations (level 1 combinations are known as the group size). The engine manager will use this parameter to attempt to create branches of similar size. (The engine manager will not create engines that do not have a branch.) The engine divider algorithm uses the value of TargetTaskSize as a system-preferred branch size to create branches that are more equal in size, which improves engine performance. The engine divider will try to add as many tasks as possible to an existing branch, up to the limit of TargetTaskSize level 1 combinations, before adding new branches. Coming up next: the engine divider, group size, level 1 combinations, MAX_FORE_LEVEL, engine parameters.

    Read the article

  • Reproducible freezes on an AMD Fusion (E-350) Sony Vaio

    - by doycho
    So a week ago I bought it, and I've been struggling to make the Ubuntu install on it stable. There's one thing that makes my life miserable, though: an easily reproducible freeze when I start some kind of video. So here is what happens: everything works fine for some time. I start vlc/mplayer/flashplayer/totem with something to watch. In a few minutes' time I lose the sound (nothing in the logs at this point). At that time the video app instantly allocates all the memory and its CPU usage skyrockets. Total freeze. I can move the cursor around for a few seconds and sometimes even switch to another app. But ultimately there comes the time I can't do anything - can't kill X with ctrl+alt+backspace (I have it enabled), can't switch to any other console (ctrl+alt+f1-6), can't connect to the machine via ssh. The only way to restart it is the ctrl+alt+SysRq+UABI magic :) What discourages me most is the fact that I can't see anything in the logs. The only error I've noticed is Jun 19 17:00:37 serenity kernel: [ 1506.350676] software-center[17581]: segfault at 30 ip 00007fd3631b814c sp 00007fff18a6fa10 error 4 in libgtk-x11-2.0.so.0.2400.4[7fd362f7d000+436000]. I've been searching through the Xorg log, kernel logs, and syslog. If you have any idea how I can get more debug info, I'll be glad to try it. Things I've tried: changing drivers - the open source one, the proprietary driver, the xorg-edgers PPA - https://launchpad.net/~xorg-edgers/+archive/ppa; changing to the latest stable kernel (2.6.39). Some notes: it may be irrelevant, but the sound is constantly stuttering; this is probably a separate issue though. I've found that if I start more video/sound apps the freeze happens faster.

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:
    - reading 5 private entity variables (fetched from the datastore)
    - fetching as many as 20 static variables (from the datastore or persisted in server memory)
    - writing 5 entity variables
    Clients of the game would authenticate and set state directly against GAE as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore. This would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
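
    To put rough numbers on the quota question (a back-of-the-envelope sketch using only the figures above; real App Engine billing also counts small ops, index writes, etc.), the once-per-minute option implies roughly the following datastore volume per day:

        // Worst-case daily datastore estimate from the numbers in the question:
        // 10,000 entities, one update per minute, 5 private reads + up to 20 static
        // reads + 5 writes per entity. Purely illustrative arithmetic.
        #include <cstdio>

        int main() {
            const long long entities      = 10000;
            const long long updatesPerDay = 24 * 60;                  // one world update per minute
            const long long entityUpdates = entities * updatesPerDay; // 14,400,000 per day

            const long long reads  = entityUpdates * (5 + 20);        // worst case: all statics from datastore
            const long long writes = entityUpdates * 5;

            std::printf("entity updates/day: %lld\n", entityUpdates); // 14,400,000
            std::printf("reads/day (worst case): %lld\n", reads);     // 360,000,000
            std::printf("writes/day: %lld\n", writes);                // 72,000,000
        }

    The factor that moves most is the 20 static variables: if the authoritative server (or an in-memory cache such as memcache on GAE) keeps those in memory, the worst-case read volume drops by a factor of five, which is essentially the trade-off being weighed.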

    Read the article

  • How can I set my resolution to 1280x1024 on an Acer Aspire Revo 3700?

    - by torbengb
    I've just set up a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2). I've just made a clean install of Ubuntu 10.10 using the standard USB pendrive method. Almost everything works OK, but the graphics are not: the recommended Nvidia driver is activated but the monitor is not detected, so the resolution is wrong. How can I make Ubuntu detect my monitor? How can I get the proper resolution (1280x1024) in Ubuntu? I know that my monitor is not a CRT but an LCD: it's a BenQ, model T905, with 1280x1024 resolution at 60Hz, connected via a normal VGA cable. DVI or HDMI is not an option. When I go to System > Preferences > Monitors, I get: "It appears that your graphics driver does not support the necessary extensions to use this tool. Do you want to use your graphics driver vendor's tool instead? YES NO" If I say NO then I get one window, and for YES I get another. In both cases I don't see how I can fix this problem. The main reason for getting this new computer was that I was sick of having graphics problems on the old one, with a very ugly workaround that didn't give me hardware support - but at least I got the resolution. Why is this so difficult... sigh!

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement level events.  The latest beta has this feature available.  If you’re interested in this I’d like to get some feedback. I handle the SP:StmtCompleted and the SQL:StmtCompleted events.  These report CPU, reads, writes and duration. I’m not in any way saying it’s a good idea to trace these events.  Use with caution as this can make your traces much larger. If there are statement level events in the trace file they will be processed.  However the query screen displays batch level *OR* statement level events.  If it did both we’d be double counting. I don’t have very many traces with statement completed events in them.  That means I only did limited testing of how it parses these events.  It seems to work well so far though.  Your feedback is appreciated. If you ever write loops or cursors in stored procedures you’re going to get huge trace files.  Be warned. I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added.  This is a result of the collection I use being case-sensitive and SQL Server not being case-sensitive.  I thought I had properly coded around that but finally realized I hadn’t.  It should be fixed now. If you have any questions or problems the ClearTrace support forum is the best place for those.

    Read the article

  • Seeking solution for printing-reporting .NET

    - by Parhs
    I am developing an application that prints, in separate threads, in extreme cases about 20-25 pages per minute to various thermal printers. Currently the templates for these are XAML XPS documents. All printers have graphics drivers that support EMF/GDI printing, so the conversion to GDI/EMF is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it is impossible to change the codepage / emulation. Also, most computers running my software are low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device. Most of these don't do a good job of extracting text from XPS, as they don't follow the standard but rather the way Windows converts GDI to XPS. Even in text-only mode some of them don't support all character encodings, and it is impossible to send a paper-cut command after the signature. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image and replace text. I know there is StringTemplate, which could do the generation of the template, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    The yield operator (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be. I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean code point of view instead, and see if it really helps us to keep our code understandable, reusable and testable. Let's say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. The most elegant way would seem to be a recursive method performing a classic depth traversal and returning the sum:

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    "Do One Thing" - Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it travels inside the tree and it performs some computation - in this case it calculates a sum. Doing two things in one method is definitely a bad thing, for several reasons:
    - Understandability: readability / refactoring.
    - Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
    - Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration.
    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call - in our case the sum - against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy's Law the iteration will have a bug as well as the calculation, and both bugs in combination will cause the sum to be accidentally exactly the one you expect, so the test will PASS. :)
    Ok, let's use yield! That's why it is generally a very good idea not to mix but to isolate "things". So let's use yield:

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods do not know anything about each other: one contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, switching from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion - this will not influence your calculation logic. And vice versa, you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.
    How about not using yield? Now let's ask the question - what if we did not have the yield operator? A brief look at the generated code gives us the answer: the compiler generates a 150-line class to implement the iteration logic.

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature - we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing in a method. That's why I generally welcome using stuff like that.
    Note: The recursive depth traversal used above is possibly the most compact algorithm but not the best one from the performance and memory utilization point of view. It was chosen to emphasize the other, primary aspects of this post.

    Read the article

  • Oracle VM server for SPARC 2.2 on S11

    - by Liam Merwick
    Oracle VM Server for SPARC 2.2 has been released for a little while now. The https://blogs.oracle.com/virtualization blog has an overview of all the 2.2 features. Initially, what was released was the SVR4 package for Solaris 10 (which is unbundled and wasn't constrained by any external schedule). On Solaris 11, the 'ldomsmanager' package is built into Solaris (and therefore doesn't need to be downloaded separately), so it is delivered as part of an S11 Support Repository Update (SRU). Some of the features in 2.2 are specific to S11 (SR-IOV and the ability to live migrate between machines with different CPU types), and so there have been many requests to know when the S11 bits are coming. Solaris 11 SRU8.5 was released on Friday and it includes Oracle VM Server for SPARC 2.2, so if you're already running an S11 SRU all you need to do is a 'pkg update' to get the 2.2 bits. If you're still running the original S11 and your 'pkg publisher' output shows the /release repository, then you'll need to sign up for the /support repo by getting the appropriate keys and certificates to access the repository (requires a support contract). The 2.2 Admin Guide documents how to do this upgrade on S11. Two S11 articles which have some useful details on upgrading (not just 'ldomsmanager') via the support repositories are: How to Update Oracle Solaris 11 Systems From Oracle Support Repositories by Glynn Foster, and Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository by Peter Dennis. In particular, if you'd like to stick with the v2.1 release when upgrading to SRU8.5 or greater, see the 'pkg freeze' section of Peter's article.

    Read the article

  • GLSL billboard move center of rotation

    - by Jacob Kofoed
    I have successfully set up a billboard shader that works: it can take in a quad and rotate it so it always points toward the screen. I am using this vertex shader:

        void main(){
            vec4 tmpPos = (MVP * bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0)) +
                          (MV * vec4(
                              vertexPosition.x * 1.0 * bufferMatrix[0][0],
                              vertexPosition.y * 1.0 * bufferMatrix[1][1],
                              vertexPosition.z * 1.0 * bufferMatrix[2][2],
                              0.0));
            UV = UVOffset + vertexUV * UVScale;
            gl_Position = tmpPos;
        }

    BufferMatrix is the model matrix; it is an attribute to support instanced drawing. The problem is best explained through pictures. This is the start position of the camera: [first screenshot] And this is the position, looking in from 45 degrees to the right: [second screenshot] Obviously, as each character is its own quad, the shader rotates each one around its own center towards the camera. What I in fact want is for them to rotate around a shared center; how would I do this? What I have been trying so far is:

        mat4 translation = mat4(1.0);
        translation = glm::translate(translation, vec3(pos) * 1.f * 2.f);
        translation = glm::scale(translation, vec3(scale, 1.f));
        translation = glm::translate(translation, vec3(anchorPoint - pos) / vec3(scale, 1.f));

    where the translation is the bufferMatrix sent to the shader. What I am trying to do is offset the center, but this might not be possible with a single matrix..? I am interested in a solution that doesn't require CPU calculations each frame, but rather sets things up once and then lets the shader do the billboard rotation. I realize there are many different solutions, like merging all the quads together, but I would first like to know if the approach with offsetting the center is possible. If it all seems a bit confusing, it's because I'm a little confused myself.
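
    A hedged sketch of one way to get a shared pivot (an assumed scheme, not taken from the question): keep a single anchor point for the whole string and treat each glyph quad as an offset from that anchor, applied in the same term as the quad's own vertex offset. Everything here (the struct, the billboardVertex helper) is hypothetical; only the glm types mirror the question's C++ side.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <cstdio>

        // Per-glyph instance data under this assumed scheme: every instance shares the
        // same anchor (the center of the whole text block); only the offset differs.
        struct GlyphInstance {
            glm::vec3 offsetFromAnchor;   // glyph position minus the shared anchor
            glm::vec2 scale;              // glyph size
        };

        // CPU-side illustration of the clip-space position the shader would produce:
        // the shared anchor goes through the full MVP (as the bufferMatrix translation
        // does today), while the glyph's offset from the anchor is folded into the same
        // term as the quad's own vertex offset. Since the anchor is identical for every
        // glyph, the whole string pivots around that one point.
        glm::vec4 billboardVertex(const glm::mat4& MVP, const glm::mat4& MV,
                                  const glm::vec3& anchor, const GlyphInstance& g,
                                  const glm::vec3& vertexPos) {
            glm::vec3 local = vertexPos * glm::vec3(g.scale, 1.0f) + g.offsetFromAnchor;
            return MVP * glm::vec4(anchor, 1.0f) + MV * glm::vec4(local, 0.0f);
        }

        int main() {
            glm::mat4 V = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f), glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
            glm::mat4 P = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
            glm::mat4 M(1.0f);                       // identity model matrix for the text block
            GlyphInstance g{glm::vec3(2.0f, 0.0f, 0.0f), glm::vec2(1.0f)};
            glm::vec4 p = billboardVertex(P * V * M, V * M, glm::vec3(0.0f), g,
                                          glm::vec3(-0.5f, -0.5f, 0.0f));
            std::printf("clip pos: %f %f %f %f\n", p.x, p.y, p.z, p.w);
        }

    The per-frame work stays on the GPU under this scheme: the C++ side fills the per-instance offsetFromAnchor once, and the vertex shader would compute the same expression as billboardVertex, so all glyphs pivot around the one anchor instead of around their own centers.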

    Read the article
