Search Results

Search found 22993 results on 920 pages for 'global load balancing'.


  • Reading the *.sqlite files stored in the firefox profile.

    - by celil
    How would one go about reading the data in the *.sqlite files in one's profile? Trying to read them with sqlite3 was unsuccessful:

        $ sqlite3
        SQLite version 3.7.4
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> .load ./downloads.sqlite
        Error: dlopen(./downloads.sqlite, 10): no suitable image found.  Did find:
                ./downloads.sqlite: unknown file type, first eight bytes: 0x53 0x51 0x4C 0x69 0x74 0x65 0x20 0x66
        sqlite> .exit

    Is this due to Firefox using an older version of sqlite? I am using Firefox v3.6.13.
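    The .load command loads compiled SQLite extension libraries, not database files, which is what the dlopen error above is complaining about. A minimal sketch of opening the database directly instead (the moz_downloads table and its columns are Firefox 3.x-era names, so treat them as examples that may differ in other versions):

        # Copy the file first so a running Firefox's lock doesn't interfere
        $ cp downloads.sqlite /tmp/downloads.sqlite
        $ sqlite3 /tmp/downloads.sqlite
        sqlite> .tables
        sqlite> SELECT name, source FROM moz_downloads LIMIT 5;
        sqlite> .exit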

    Read the article

  • Animating DOM elements vs refreshing a single Canvas

    - by mgibsonbr
    A few years ago, when the HTML Canvas element was still kinda fresh, I wrote a small game in a rather "unusual" way: each game element had its own canvas, and frequently animated elements even had multiple canvases, one for each animation sprite. This way, the translation would be done by manipulating the DOM position of the canvases, while the sprite animation would consist of altering the visibility of the already drawn canvases. (z-indexes, of course, were the tricky part.) It worked like a charm: even in IE6 with excanvas it showed decent performance, and everything was rather consistent between browsers, including some smartphones.

    Now I'm thinking of writing a larger game engine in the same fashion, so I'm wondering whether it would be a good idea to do so in the current context (with all the advances in browsers and so on). I know I'm trading memory for time, so this needs to be customizable (even at runtime) for each machine the game will be running on. But I believe using separate canvases would also help to avoid the game "freezing" on CPU spikes, since the translation would still happen even if the redraws lag for a while. Besides, the browsers' rendering engines are already optimized in many ways, so I'm guessing this scheme would also reduce the load on the CPU (in contrast to doing everything in JavaScript - especially the less optimized ones).

    It looks good in my head, but I'd like to hear the opinion of more experienced people before proceeding further. Is there any known drawback of doing this? I'm particularly inexperienced in dealing with the GPU, so I wonder whether this "trick" would nullify any benefit of using a single, big canvas. Or maybe on modern devices it's overkill (though I'm skeptical about the claims that canvas+js - especially WebGL - will ever be a good alternative to native code). Any thoughts?

    Read the article

  • Running Python scripts in a browser

    - by sunwukung
    I want to start learning Python, and I'm having trouble getting scripts to load up in a browser (using Wamp). So far I've tried the following:

    1: Added the following lines to httpd.conf:

        AddHandler cgi-script .py
        Options ExecCGI

    I navigate to localhost/path/to/script/myscript.py but get an Internal Server error.

    2: Downloaded mod_wsgi-win32-ap22py26-3.0.so, renamed it to mod_wsgi (running Wamp with Apache 2.2), and added the following lines to httpd.conf:

        AddHandler mod_wsgi .py
        WSGIScriptAlias /wsgi/ "path/to/my/pythonscripts/folder/"

    But when I navigate to the script, it renders the script in its entirety, i.e.

        #!c:/Python26/python.exe -u
        print "hello world"

    I managed to get CherryPy working, but ideally I want to learn the language in a relatively raw context before digging into a framework. Can anyone give me some pointers?
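    In setups like this, the Internal Server Error usually comes from the CGI script not printing an HTTP header block before its output, or from ExecCGI not being enabled for the directory actually being served; Apache's error.log normally names the exact cause ("Premature end of script headers", "Options ExecCGI is off in this directory", and so on). A minimal sketch, with paths that are examples for a typical WAMP layout:

        # httpd.conf (directory path is illustrative)
        AddHandler cgi-script .py
        <Directory "c:/wamp/www/scripts">
            Options +ExecCGI
        </Directory>

        #!c:/Python26/python.exe -u
        # myscript.py - a CGI script must emit headers, then a blank line, then the body
        print "Content-Type: text/plain"
        print
        print "hello world"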

    Read the article

  • How to rebuild a Li Ion laptop battery?

    - by spoulson
    I have an aging Gateway NX560XL laptop. The battery is toast and a new one, even aftermarket, starts at $130. So, to experiment, I began tearing apart the old battery to see what can be done. I found it used 8 standard size 18650 Li Ion cells arranged two cells parallel then in series (like: ====). Some online shopping revealed ~$7-13/ea replacements depending on mAh output. My plan is to load test to determine the bad cells and replace only those, as I read that typically only 1 or 2 may be bad. I'm proficient with soldering; however, these cells are attached with welded tabs. Some of them broke during disassembly and I'm not sure how to reattach them. What I found online are cells like these that have solder tabs pre-welded to the ends so that I can solder wires onto them. Is there any guide available that provides the instructions and parts to do this kind of rebuild?

    Read the article

  • Convert old AVI files to a modern format

    - by iWerner
    Hi, we have a collection of old home videos that were saved in AVI format a long time ago. I want to convert these files to a more modern format because the Totem Movie Player that comes with Ubuntu 10.4 seems to be the only program capable of playing them. The files seem to be encoded with an MJPEG codec, and playing them in VLC or Windows Media Player plays only the sound but there is no video. Avidemux was able to open the files, but the quality of the video is severely degraded: the video skips frames and is interlaced (it's not interlaced when playing it in Totem). Neither ffmpeg nor mencoder seems to be able to read the video stream. mencoder reports that it is using ffmpeg's codec. Here's a section from its output:

        ==========================================================================
        Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
        [mjpeg @ 0x92a7260]mjpeg: using external huffman table
        [mjpeg @ 0x92a7260]mjpeg: error using external huffman table, switching back to internal
        Unsupported PixelFormat -1
        Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    while running ffmpeg produces the following:

        $ ffmpeg -i input.avi output.avi
        FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-version=4:0.5.1-1ubuntu1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 1 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Mar 4 2010 12:35:30, gcc: 4.4.3
        [avi @ 0x87952c0]non-interleaved AVI
        Input #0, avi, from 'input.avi':
          Duration: 00:00:15.24, start: 0.000000, bitrate: 22447 kb/s
            Stream #0.0: Video: mjpeg, yuvj422p, 720x544, 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 720x544, q=2-31, 200 kb/s, 90k tbn, 25 tbc
            Stream #0.1: Audio: mp2, 44100 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 0 fps= 0 q=0.0 Lsize= 143kB time=15.23 bitrate= 76.9kbits/s
        video:0kB audio:119kB global headers:0kB muxing overhead 20.101777%

    So the problem is that the output does not contain any video, as evidenced by the video:0kB at the end. In all of the above cases the audio comes out fine. So my question is: what can I do to convert these files to a more modern format with more modern codecs?
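    If a newer ffmpeg build decodes the MJPEG stream correctly (the external-Huffman-table handling was improved in later libavcodec releases), a typical re-encode to H.264/AAC would look like the sketch below; file names and quality settings are illustrative, and 0.5-era builds spell the options -vcodec/-acodec rather than -c:v/-c:a:

        # Re-encode to H.264 video + AAC audio in an MP4 container (lower -crf = higher quality, bigger file)
        ffmpeg -i input.avi -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p -c:a aac -b:a 192k output.mp4

        # Or keep the original MJPEG frames untouched and only re-encode the PCM audio, in a Matroska container
        ffmpeg -i input.avi -c:v copy -c:a aac -b:a 192k output.mkv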

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast: Focus on Oracle Data Profiling and Data Quality 11g, February 24th, 12am CET.

    Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough.

    Agenda:
    · Oracle Data Integration Strategy overview
    · A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
      o Oracle Data Profiling
      o Oracle Data Quality for Data Integrator
      o Live demo
      o Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • How to optimize sandboxie for faster application loading

    - by user23950
    I'm referring to this one: http://www.sandboxie.com/index.php?DownloadSandboxie Do you know of any alternatives that might be better than this software? If you don't know of any alternative, can you show me how I can make it faster to load Firefox, or any other browser? It really makes applications incredibly slow to load when you run them inside a sandbox. I've also seen the list here: http://superuser.com/questions/57129/alternative-to-sandboxie-for-64-bit-systems Do you know of any others?

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience or knowledge of the field they are recruiting for. A warning to those companies hiring technical recruiters -- ensure that the technical recruiters you hire to fill a position are actually technical. Here's proof below, where I make up completely ridiculous technologies, but still have interest from the recruiter for an interview.

    Letter to me: Hello - Your name came up as a possible match for a long term contract Cold Fusion Developer role I have in Bothell, WA. This role requires you to be onsite in Bothell, WA. This is a tough role to fill so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerly, Mindy Recruiter

    My response: Mindy -- Wow I'm super-excited that you took the time to contact me about this position! Let me tell you, you won't be disappointed with my skill set! Firstly, I've been developing in ColdFusion since 1993 before it was owned by Adobe and it was operating under code name, "Hot-Jack". Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu. Not only do I have a boat load of ColdFusion EXP, I also have a ton of experience in the open source communities lesser known derivative of CF, Mo'Fusion (MF). I've also invested thousands of hours of my time learning esoteric programming languages. Look forward to working with you! George

    And her response: Hi George – just left you a message. Give me a call at your convenience. The role does require someone to be onsite here.. are you able to relocate yourself? Mindy

    [Sigh]

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News
    - Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)
    - Accenture & Oracle Study: The Challenges of Corporate Financial Reporting (link)
    - Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link)
    - Flash Demo: OBIEE & Exalytics in Retail (link)

    Customers
    - Italian Partner Alfa Sistemi implemented at Autovie Venete S.p.A.: Integrates Business Intelligence and Performance Management to Improve Efficiency and Speed for Managing Public Works Projects (English version) / Autovie Venete implementa un sistema integrato di Business Intelligence e Performance Management per migliorare l’efficienza e la tempestività dell’attività di Controlling di Commessa (Italian version)
    - FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75%
    - Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE
    - Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability
    - Children’s Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools
    - Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link)
    - Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)

    Events
    - June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link)
    - June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (performance architects) (link)
    - June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link)
    - June 21: Comprehensive and Accurate Forecasting for Healthcare (link)
    - June 26: What Exactly is Exalytics? (KPI Partners) (link)
    - Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)
    - Webcast Replay: Is Hope and Email The Core of Your Reconciliation Process? (link)
    - Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x (link)
    - Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability? (link)

    Enterprise Performance Management
    - Final Oracle EPM Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link)
    - New on EPM Blog: What's Going on With IFRS? (link)
    - How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory! (link)
    - New Youtube Video: Business Case Analysis with Oracle Crystal Ball (link)
    - Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions and Parameter Edits When Fitting Distributions, to name a few. Get full details from the online New Features Guide (link)
    - New DRM Oracle-by-Examples now available (link)
    - Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don’t (link)
    - Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link)

    Business Intelligence
    - Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store. How Oracle enabled real-time operational reporting for its $20B services contract business with Golden Gate & OBIEE (link)
    - KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link)
    - “Getting Started with Oracle Endeca Information Discovery” video tutorials now available (link)
    - Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link)
    - Oracle Fusion Applications: Monthly Partner Updates Webcast Replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together:
      - More on Fusion CRM: Fusion Marketing
      - More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners
      - Introducing the Oracle Fusion Accounting Hub
      - Implementing Fusion Applications using Oracle's Composers
      - Oracle Fusion Applications Co-Existence

    Read the article

  • Time between AWS Notifying of Scale Down and Terminating instance

    - by SteveEdson
    Here is the scenario: there are multiple EC2 instances behind a load balancer. When traffic dies down, the SCALE_DOWN policy is triggered from a CloudWatch alarm. What I would like is for the instance that is going to be terminated, or a separate server altogether, to be able to run a quick script that will execute a few commands to ensure all data has been transferred. My initial question was going to be how I can send a notification when an instance is going to be terminated by an auto scale, SCALE_DOWN policy. But then I saw this question: Amazon EC2 notifying the instance when the autoscale service terminates it. If the notification is sent, how much time is there before the instance actually gets terminated? Are there any parameters to specify this time? Would it be a better idea to notify an instance that it is no longer needed, and get the instance to terminate itself once it has finished running the final script? Or am I making this into a bigger problem than it actually is, and there's a far simpler solution?
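    For what it's worth, Auto Scaling later gained lifecycle hooks for exactly this pattern: the group parks a terminating instance in a Terminating:Wait state until a completion signal arrives or a timeout expires, which gives a drain script time to run. A sketch using the aws CLI, where the group name, hook name and timeout are made-up examples:

        # Pause instances selected for termination until the drain work is acknowledged
        aws autoscaling put-lifecycle-hook \
            --lifecycle-hook-name drain-chats \
            --auto-scaling-group-name my-web-asg \
            --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
            --heartbeat-timeout 300 \
            --default-result CONTINUE

        # Run on (or on behalf of) the instance once the final script has finished
        aws autoscaling complete-lifecycle-action \
            --lifecycle-hook-name drain-chats \
            --auto-scaling-group-name my-web-asg \
            --instance-id i-0123456789abcdef0 \
            --lifecycle-action-result CONTINUE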

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is:

    - a single-page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which Agents are still connected (online)

    And our hardware architecture today is:

    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats

    So as it stands today, one of the web app servers could go down and we would be OK. However, if something were to happen to the SQL Server / Windows service server we would be boned. My question - how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Why does 'top' say my machine is only 50% idle?

    - by Chris Moore
    What's going on here? I'm running nothing on the system, iotop and iftop show the network and hard drive are both idle, and top (sorted by %CPU) shows nothing running. So why is the system only 50% idle? What's the other 50% waiting for? How can I find out?

        top - 12:01:05 up 3 days, 15:03, 1 user, load average: 6.00, 6.01, 6.05
        Tasks: 179 total, 1 running, 178 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.7%us, 0.0%sy, 0.0%ni, 49.7%id, 49.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  2053996k total, 1992600k used,   61396k free,   81680k buffers
        Swap: 4092924k total,   10740k used, 4082184k free, 1338636k cached

          PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         1042 deb   20  0 21468 1412 1000 R    1  0.1  0:00.03 top
            1 root  20  0 24188 1952 1152 S    0  0.1  0:01.44 init
            2 root  20  0     0    0    0 S    0  0.0  0:00.05 kthreadd

    Update: dmesg shows the printer driver misbehaving:

        [28858.561847] cnijnetprn[1503]: segfault at 29 ip 00007f56cf3480f7 sp 00007fffb964ec30 error 4 in libcnnet.so.1.2.0[7f56cf345000+9000]
        [68851.187802] cnijnetprn[9180]: segfault at 29 ip 00007ffe7636a0f7 sp 00007fff9a8b1990 error 4 in libcnnet.so.1.2.0[7ffe76367000+9000]
        [155412.107826] cnijnetprn[19966]: segfault at 29 ip 00007fc31de770f7 sp 00007fffc03aa8e0 error 4 in libcnnet.so.1.2.0[7fc31de74000+9000]

    and also some issue with cp:

        [248041.172067] INFO: task cp:27488 blocked for more than 120 seconds.
        [248041.172071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [248041.172075] cp D ffffffff81805120 0 27488 27345 0x00000004
        [248041.172080] ffff880078d57a38 0000000000000046 ffff880078d579d8 ffffffff81032a79
        [248041.172085] ffff880078d57fd8 ffff880078d57fd8 ffff880078d57fd8 0000000000012a40
        [248041.172090] ffff88007b818000 ffff880069acc560 ffff880078d57a18 ffff88007f8532c0
        [248041.172095] Call Trace:
        [248041.172104] [<ffffffff81032a79>] ? default_spin_lock_flags+0x9/0x10
        [248041.172109] [<ffffffff8110a360>] ? __lock_page+0x70/0x70
        [248041.172114] [<ffffffff815f0ecf>] schedule+0x3f/0x60

    I did try copying something to the USB stick that's plugged into the router and mounted onto this computer using mount.cifs. That almost always causes everything to lock up, so I'm guessing that's the problem. I'll reboot and stop using mount.cifs.
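    The 49.7%wa figure is I/O wait, and tasks blocked on a dead network mount sit in uninterruptible sleep (state D), which the load average counts even though no CPU is being used. A couple of hedged commands to confirm that before rebooting (the mount point is an example):

        # List processes in uninterruptible sleep (state D) and what they are waiting on
        ps -eo state,pid,user,wchan:30,cmd | awk '$1 == "D"'

        # If they are all stuck under the CIFS mount, a lazy unmount usually releases them
        sudo umount -l /mnt/router-usb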

    Read the article

  • Unable to record sound from web browser (Firefox / Chromium) using recordmydesktop

    - by thamurath
    I have to do some screencast tutorials and I am using recordmydesktop with the gtk frontend to do it. I need to record the sound as well, and here is where I have found the problem. It took me some time, but now I can record the sound from almost every application on my desktop ... almost. I need to capture some sound from a web application using Java, but when I load the page nothing appears in the playback tab of pavucontrol. I think this is the problem, because if there is no sound stream I think the recordmydesktop program thinks there is no sound to record ... the funny thing is that I can hear the sound in my speakers! I have tried with Firefox and Chromium with no success, although I have been able to record YouTube videos without problem, so it seems that Java is the key here. Any suggestion or idea? P.S.: I am using Ubuntu 11.10 with this configuration (if more information is needed please let me know; sigh, I cannot post images): I have an Audigy2 sound card using the Analog Stereo Output profile. I also have an "Internal Audio" device, but I have it with the "Off" profile. In recordmydesktop - Advanced - Sound: Device = default
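    One thing worth checking in a setup like this is whether the Java plugin is talking to ALSA directly instead of PulseAudio, in which case no stream ever appears in pavucontrol even though the speakers work. A rough sketch of recording from the output's monitor source instead (treat the option and source names as assumptions to verify on your system):

        # Find the monitor source that corresponds to the speakers
        pactl list short sources | grep monitor

        # Point recordmydesktop at PulseAudio rather than an ALSA hw device
        recordmydesktop --device pulse
        # then, while recording, open pavucontrol's Recording tab and move the
        # recordmydesktop stream onto "Monitor of ..." for the Audigy output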

    Read the article

  • when to use squid on server side?

    - by ajsie
    So I have set up Apache serving my PHP pages. I read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid is located in the same network (or another) and caches content requested by the web browsers; then when another web browser wants the same page, Squid returns that page cached locally, so it never sends a request to the Apache server (faster response time for the client, and reduced load for the server). So it seems that Squid is for the client side (web browser), and has nothing to do with the server side (Apache). But then some people tell others how they have sped up Apache using Squid. So I'm confused. Could Squid be used on the server side too? And how would it work?
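    What those reports usually describe is Squid running on the server side as a reverse proxy ("accelerator") in front of Apache: it answers client requests from its cache and only passes cache misses through to the origin. A minimal squid.conf sketch, where the hostname and ports are examples:

        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apacheOrigin

        acl our_site dstdomain www.example.com
        http_access allow our_site
        cache_peer_access apacheOrigin allow our_site
        cache_peer_access apacheOrigin deny all

    Apache would then listen on 127.0.0.1:8080, and dynamic PHP responses only get cached if they send cache-friendly headers such as Cache-Control or Expires.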

    Read the article

  • Experience of Python's “PEP-302 New Import Hooks”

    - by Koichi Sasada
    I'm one of the developers of Ruby (CRuby). We are working on the Ruby 2.0 release (planned for release in February 2012). Python has "PEP 302: New Import Hooks" (2003): This PEP proposes to add a new set of import hooks that offer better customization of the Python import mechanism. Contrary to the current import hook, a new-style hook can be injected into the existing scheme, allowing for a finer grained control of how modules are found and how they are loaded. We are considering introducing a feature similar to PEP 302 into Ruby 2.0 (CRuby 2.0). I want to make a proposal which can persuade Matz. Currently, CRuby can load scripts only from file systems in a standard way. If you have any experience or consideration about PEP 302, please share. For example:

    - "It's a great spec. No need to change it."
    - "It is almost good, but it has this problem..."
    - "If I could go back to 2003, then I would change the spec to..."

    I'm sorry if such a question is not suitable for here. I posted here because I'm not sure that I can ask this question at python-dev (of course, the list is not for CRuby development). This post is moved from http://stackoverflow.com/questions/11188229/experience-of-pythons-pep-302-new-import-hooks.
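    For readers who haven't seen the mechanism, a minimal PEP 302-style finder/loader pair looks roughly like this (Python 2-era API, since that is what the PEP defines; the "hello_virtual" module is a made-up example):

        import imp
        import sys

        class HelloFinder(object):
            # PEP 302 finder: every import consults the objects on sys.meta_path
            def find_module(self, fullname, path=None):
                return self if fullname == "hello_virtual" else None

            # PEP 302 loader: must insert the module into sys.modules and return it
            def load_module(self, fullname):
                if fullname in sys.modules:
                    return sys.modules[fullname]
                mod = imp.new_module(fullname)
                mod.__loader__ = self
                sys.modules[fullname] = mod
                exec "GREETING = 'hello from a PEP 302 loader'" in mod.__dict__
                return mod

        sys.meta_path.append(HelloFinder())
        import hello_virtual
        print hello_virtual.GREETING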

    Read the article

  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in screenshot), there can be doors that open and close and change the level's layout. Impressive.

    I've tried implementing pathfinding in my clone. First attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and it would get exponentially slower with more than a single yellow guy. Second attempt was making every yellow guy generate a pathmap (still breadth-first search) every time he moved. Worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. Last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but with a limited number of entities. Having less than half the entities in the screenshot would make the game slow. And also, I had to get the "closest" enemy by calculating distance, not shortest path.

    I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity finds the best individual path to other entities that move every turn. What's the trick?

    This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: One to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity chose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
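    A sketch of the pathmap idea from that reply, in Python for brevity (the grid representation and names are invented): one breadth-first flood fill seeded from every target at once labels each tile with its distance to the nearest target, and each entity then simply steps onto the neighbouring tile with the lowest label. Two such maps per turn (one toward enemies, one toward friendlies) cost O(tiles) rather than O(entities x tiles):

        from collections import deque

        def build_pathmap(grid, targets):
            # grid[y][x] is True for walkable tiles; targets is a list of (x, y)
            dist = {t: 0 for t in targets}
            queue = deque(targets)
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                            and grid[ny][nx] and (nx, ny) not in dist):
                        dist[(nx, ny)] = dist[(x, y)] + 1
                        queue.append((nx, ny))
            return dist  # {(x, y): moves to the nearest target}

        def step_towards(pathmap, pos):
            # Pick the reachable neighbour with the smallest distance value
            x, y = pos
            options = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if n in pathmap]
            return min(options, key=pathmap.get) if options else pos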

    Read the article

  • xDebug on Zend Server CE under Windows XP

    - by Hippyjim
    I have Zend Server installed on my Windows XP development machine, installed when I was naive and didn't know that Eclipse was going to suck so badly for PHP development. I've made the upgrade to NetBeans, but for debugging it only supports Xdebug. To be fair, I've never used "proper" debuggers before, but other folks have raved about them so I thought I'd give it a try. I followed some directions on the Zend forum about how to install Xdebug on Zend Server, disabling Zend Debugger in the process. The Xdebug "custom installation instructions" wizard tells me that my PHP was compiled with an unsupported compiler (MS VC8), and won't let me download anything. I tried a couple of the other Xdebug binaries, but they just refused to load. So I'm left without a debugger option. Does anyone know how I can change the compiler of the PHP version I have installed so I can use a debugger in NetBeans? Or how else I can get Xdebug to install on Zend Server?
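    If a compatible Xdebug DLL can be found (it has to match the PHP version, thread-safety mode and compiler of Zend Server's bundled PHP, which is exactly the sticking point here), wiring it into php.ini only takes a few lines; the DLL path below is a made-up example, and Zend Debugger has to stay disabled because the two extensions cannot be loaded together:

        ; php.ini - Xdebug 2.x-era remote debugging settings (port 9000 is NetBeans' default)
        zend_extension="C:\php\ext\php_xdebug.dll"
        xdebug.remote_enable=1
        xdebug.remote_handler=dbgp
        xdebug.remote_host=localhost
        xdebug.remote_port=9000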

    Read the article

  • /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor disappeared on ubuntu 11.10

    - by Bob
    I have an Ubuntu 11.10 server that has been up for 210 days. I have been frequently doing apt-get upgrade every few weeks, and this time I noticed that my server load average just shot up. The last time this happened between upgrades, it was because the cpu scaling governor was set to ondemand. But this time when I tried to list the contents of /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor the file is missing. There isn't even a cpufreq folder anymore! How do I fix this and ensure there is no cpu scaling going on?
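    The cpufreq directory only exists while a frequency-scaling driver is loaded, so a plausible explanation is that the newer kernel simply no longer loads one for this CPU; in that case the processor runs at whatever frequency the firmware chose and no governor-based scaling is happening at all. Some hedged commands to check (the module names are the common ones, not guaranteed for this hardware):

        # Is any scaling driver present?
        ls /sys/devices/system/cpu/cpu0/cpufreq/ 2>/dev/null || echo "no cpufreq driver loaded"
        lsmod | egrep -i 'cpufreq|acpi_cpufreq|powernow'

        # Try loading the usual ACPI driver, then re-check the governor
        sudo modprobe acpi-cpufreq
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor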

    Read the article

  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    OK, our new build is having 100% CPU spikes on each server at random intervals. For long durations it makes the site totally unresponsive - this will be at peak times as people in different countries log on to the site etc. We've looked at perfmon, memory profilers, CLR Profiler, SQL profilers, Red Gate ANTS profiler, and tried load testing in UAT - but cannot even reproduce the problem. This could mean only thousands of users hitting the live site causes it to happen. One pattern we did notice was that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact - so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?
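    One approach that tends to work when a problem only reproduces under live load is to capture a dump of the worker process during a spike and look at which managed threads are burning CPU. A hedged sketch using the Debugging Tools for Windows (process name and output path are examples):

        rem During a spike, take a hang-mode dump of the IIS worker process
        adplus -hang -pn w3wp.exe -o c:\dumps

        rem Then open the dump in WinDbg and run:
        rem   .loadby sos clr     (".loadby sos mscorwks" on .NET 2.0/3.5)
        rem   !runaway            (user-mode CPU time per thread)
        rem   ~Ns                 (switch to the hottest thread N)
        rem   !clrstack           (the managed stack that is spinning)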

    Read the article

  • need for tcp fine-tuning on heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 update 6 & 8 with quite a heavy load, i.e. 8k established connections during peak hour. Without depending much on the application provider's expertise, I want to achieve maximum output from Linux. With respect to that I have certain questions:

    1. How do I find out if there is scope for further TCP fine-tuning (without exhausting available resources), given that the benchmark values provided by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me the results?
    2. If there is scope, how shall I identify & configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter?
    3. Post-tuning, how shall I clearly measure performance enhancement / degradation?
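    As a starting point for both the measurement and the tuning questions, the kernel's own counters are worth reading before changing anything; the sysctl values below are knobs commonly adjusted on busy proxies, shown with example values to evaluate rather than recommendations:

        # Measure first: protocol-level drops, retransmits and socket/port pressure
        netstat -s | egrep -i 'overflow|drop|retrans|listen'
        cat /proc/net/sockstat
        cat /proc/sys/net/ipv4/ip_local_port_range

        # Commonly tuned knobs (example values - test and compare, don't copy blindly)
        sysctl -w net.ipv4.ip_local_port_range="1024 65000"
        sysctl -w net.ipv4.tcp_fin_timeout=30
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096
        sysctl -w net.core.somaxconn=1024
        sysctl -w fs.file-max=200000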

    Read the article

  • Is taking a semester or year off from college a good idea?

    - by astrieanna
    I am currently a Junior majoring in Computer Science at a top university (in the USA). As I'm really getting tired of taking classes, I was wondering if taking a semester or year off to do an internship(s) is a reasonable idea? It seems like it would give me more experience programming (making classes a bit easier), and give me a chance to recover from the burnout that comes from taking 18 credits a semester. A friend suggested that I just take a lighter course load, but I only have 2 more semesters of financial aid, so I need to take 18 credits in each of them in order to finish. Taking time off from school is not a normal thing to do, at least at this school. Since more internships are advertised for the summer (that I've seen), I was wondering if there are internships available in times other than the summer? If I took off for a whole year, would it be more valuable to try to stay at the same company for the whole time or to try to get a series of internships at different ones? Valuable in both the sense of resume value and personal value. Would it be easier or harder to get multiple shorter internships?

    Read the article

  • How can I upgrade my server's kernel without rebooting?

    - by Oli
    This is a loaded question because I'm already aware of, and am very interested in, ksplice. The problem is that since they were bought by Oracle, they have been forced to pull numerous server distributions from the offerings. The answer isn't as simple as it once was. I noticed a question on Unix.SE that states: "You can build your own ksplice patches to dynamically load into your own kernel." Great! But how?! I've installed the free ksplice package in the repo on my desktop (not ksplice-uptrack, which is non-free) and now want to generate and apply updates. What's the process? Are there any scripts out there to automate the process? Moreover, if all the machinery required for rebootless upgrades is sitting there in the kernel (and the ksplice package), why on earth aren't we taking advantage of it by default? Note 1: I am happy for a solution besides ksplice, but it has to deliver the same thing: rolling updates to the kernel that can be applied without rebooting the server. Note 2: I'll say it again; the main ksplice "service" does not support Ubuntu Server. It used to but it doesn't any more. When I talk about wanting to use ksplice, I'm talking about the open source tools in the ksplice package. Any answer that talks about ksplice-uptrack is probably not what I'm after, as this is the part that integrates directly with the aforementioned "service".

    Read the article

  • prevent apt dependency from being satisfied (permanently)

    - by Bryan Hunt
    I want to install mailman (just to use its mail archiving feature) but Ubuntu wants to pull down a load of extra dependencies:

        sudo apt-get install mailman
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          apache2 apache2-mpm-worker apache2.2-common
        Suggested packages:
          apache2-doc apache2-suexec apache2-suexec-custom spamassassin lynx listadmin

    Is there any way to mark those packages (apache2 apache2-mpm-worker apache2.2-common) as never to be installed? This is not 2002 ;)
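    One way to do this, assuming apache2 is being pulled in as a Recommends or an optional alternative rather than a hard Depends (if it is a hard dependency, apt will simply refuse the install instead), is a negative pin in /etc/apt/preferences.d/, one stanza per package:

        Package: apache2
        Pin: release *
        Pin-Priority: -1

        Package: apache2-mpm-worker
        Pin: release *
        Pin-Priority: -1

        Package: apache2.2-common
        Pin: release *
        Pin-Priority: -1

    The lighter-weight thing to try first is "sudo apt-get install mailman --no-install-recommends", which skips recommended packages entirely.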

    Read the article

  • apache2 slow responding (debian)

    - by baloo
    I'm running an apache2 2.2.9 webserver with mod_python and mpm_worker_module. The current config for the MPM is:

        ServerLimit 32
        StartServers 10
        MaxClients 800
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 0

    The server has 1G of RAM and a 100Mbit connection. Checking netstat -na | grep ESTABLISHED | wc -l gives me a number between 50 and 60. The load is about 1.0. Every page load is also cached by memcached. I can't see why the server is so slow in responding to new connections, sometimes dropping them completely. Also tried disabling iptables to make sure it's not because of a full state table or something like that. The only thing in dmesg is a lot of spam about "TCP: Treason uncloaked!"

    Read the article

  • How to Install Windows Media Player 11 on Wine

    - by namkid
    I am battling to install Windows Media Player 11 on Wine. I have tried the following:

    1. Open a Terminal (Applications, Accessories, Terminal) and type "sudo apt-get install wine." This installs Wine Windows Emulator, a free application that allows you to run many Windows programs within Linux.
    2. Download Windows Media Player 11 for Windows XP (link in Resources) and save it to Ubuntu's desktop. Once downloaded, right-click and select "Open with Wine Windows Emulator." Follow the on-screen prompts for installing it to your system.
    3. Go to "Applications" then "Wine," select "Programs" and open "Windows Media Player." Click "File" then "Open" and locate a DRM file you want to play. Select "OK" to load it into Media Player.

    I installed Wine (which is step 1), but I am having problems with step 2 (downloading Windows Media Player 11 for Windows XP and saving it to Ubuntu's desktop). I'm just not finding a way to do it. I may be overlooking the "link in Resources" - I can't find it. I am stuck!

    Read the article
