Search Results

Search found 22447 results on 898 pages for 'cpu load'.


  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all, caches reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about your cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    To increase Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management, since there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage it, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. Every system shows the same performance curve: in a traditional system, increasing the Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will take double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency, which is technically limited and economically inefficient. This is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time, that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:

    - constant data-object access time, independent of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It is not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, sharing sample code and experiences that I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using it, then play with the product through our tutorial. Have fun!
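
    To make "constant access time" and "in-place processing" concrete, here is a toy sketch in Python (Coherence itself is a Java product; every name below is illustrative, not Coherence's API): a partitioned cache routes each key to exactly one owning node by hashing, so a lookup costs one hop no matter how many entries or nodes exist, and the processing is shipped to the data rather than the data to the processing.

        import hashlib

        class ToyPartitionedCache:
            """Toy model of a partitioned distributed cache: each key lives
            on exactly one node, chosen by hashing the key."""

            def __init__(self, node_count):
                self.nodes = [{} for _ in range(node_count)]  # one dict per "node"

            def _owner(self, key):
                # Key affinity: the same key always hashes to the same node.
                digest = hashlib.md5(str(key).encode()).digest()
                return self.nodes[int.from_bytes(digest[:4], "big") % len(self.nodes)]

            def put(self, key, value):
                self._owner(key)[key] = value

            def get(self, key):
                # One hop to the owning node, regardless of data volume.
                return self._owner(key).get(key)

            def invoke(self, key, processor):
                # In-place processing: the function runs where the data lives,
                # so only the small result would cross the network.
                store = self._owner(key)
                store[key] = processor(store.get(key))
                return store[key]

        cache = ToyPartitionedCache(node_count=4)
        cache.put("account:42", 100)
        cache.invoke("account:42", lambda balance: balance + 50)  # credit in place
        print(cache.get("account:42"))  # 150

    The real product does this across JVMs through its NamedCache and EntryProcessor APIs; the only point of the sketch is that lookup and processing cost stay flat as data and cluster size grow.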


  • How can I delay dropbox from starting, but not disable it?

    - by jgbelacqua
    When I log into my user account on Ubuntu 10.10, there is an unsatisfying delay before my system becomes usable. Even to launch a terminal, I have to wait a few seconds before the bash prompt appears. During this start-up period, the top process seems to be dropbox. I'm not sure what it's doing exactly (functionality is still fine as far as I can see), but I do know it really doesn't need to be doing it while I'm waiting for the desktop to appear. (This is the standard Ubuntu with the GNOME desktop, by the way.) What I would like is a static or even dependency-based delay for dropbox to start. It would be nice if it waited for, e.g., 10 minutes, or for my browser tabs to load and a typing pause. Then it could churn away on file status or cache-chewing, and I would be happy. Is there a way to do this? Thanks!
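
    One possible approach for the static-delay case, sketched under the assumption that the Linux dropbox command-line client accepts a start subcommand (it does on builds of that era, but check yours): point the Dropbox entry in Startup Applications at a small wrapper that sleeps first.

        #!/usr/bin/env python
        # delayed_dropbox.py -- hypothetical wrapper; aim your Startup
        # Applications entry at this script instead of at dropbox itself.
        import subprocess
        import time

        DELAY_SECONDS = 600  # wait ten minutes after login

        time.sleep(DELAY_SECONDS)
        subprocess.call(["dropbox", "start"])  # launch the daemon late

    A dependency-based trigger (browser tabs loaded, a typing pause) would need something watching for those conditions; the plain sleep covers the "wait 10 minutes" part.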


  • audio controls in xfce 4.8

    - by Peter
    I am seeing several questions similar to mine, but none of the answers are sufficient. I am pretty green with Ubuntu, so here goes: I was just automatically upgraded to Xfce 4.8 on Ubuntu Studio. The volume control no longer works in my panel. When I launch 'mixer' I don't see any settings, either. When I try to run "linux audio configuration" I get an error: JACK can only be configured with a loaded and stopped studio. Please create a new studio or load and stop an existing one. I understand that I can change the volume using the command line, but I can't understand why I got upgraded to something that fails on basic features. I am much less likely to recommend Ubuntu to others as a result. Thanks!


  • Should we consider code language upon design?

    - by Codex73
    Summary: This question aims to work out whether an application's intended usage should be a consideration when deciding on a development language, and which factors, if any, should be weighed when making that choice. Application type: web. Question: Of the following popular languages, when should we use one over the other? Languages: PHP, Ruby, Python. My initial thought is that the language shouldn't be considered as much as the framework. Things to consider in a framework are scalability, usage, load, portability, modularity and many more. Things to consider in code writing may be cost, framework stability, community, etc.


  • Windows 7 not booting from Ubuntu 12.10 boot menu

    - by Raj Inevitable
    I have Windows 7 on one hard drive and installed Ubuntu on a second hard drive. I configured the BIOS to boot the second hard drive (the Ubuntu one) first, then updated GRUB, so it shows Windows 7 in the boot list. I can boot into Ubuntu, but I can't boot into Windows 7; it shows the error "A disk read error occurred. Press Ctrl+Alt+Del to restart". I then configured the BIOS to boot the Windows 7 drive first; it shows Windows 7 and Ubuntu in the boot list, and Windows 7 works, but now I can't boot into Ubuntu. Please help me solve this problem; I want to dual boot from either one of the drives.


  • Mounting an NFS directory causes creation of zero-byte files

    - by Alaa
    I have two servers, Server X (IP 192.168.1.1) and Server Y (IP 192.168.1.2), both running Ubuntu 9.10. I have set up a Varnish load balancer across them for my Drupal website (Pressflow 6.22). I have mounted an imagecache directory from Server X onto Y as below:

    @X:/etc/exports == /var/www/proj/htdocs/sites/default/files/images 192.168.1.2(rw,async,no_subtree_check)
    @Y:/etc/fstab == 192.168.1.1:/var/www/proj/htdocs/sites/default/files/images var/www/proj/htdocs/sites/default/files/images nfs defaults 0 0

    I also ran this on Server X:

    X:/var/www/proj/htdocs/sites/default/files$ chmod -R 777 images

    I tried to touch, rm, vim, and cat files in the images directory mounted on Y, and everything went fine. Now, ALWAYS when Server Y's imagecache tries to create an image in the images directory, the image is created with a ZERO-byte file size. Has anyone faced the same before? Any idea how to fix this problem or what might cause it? Thanks for your help.
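
    A quick way to tell an NFS write problem apart from an imagecache problem is to write a file of known size from Server Y through the mount and stat it afterwards. A minimal sketch (the mount path is taken from the question; run it on Server Y):

        # nfs_write_check.py -- write through the NFS mount, verify the size.
        import os

        MOUNT = "/var/www/proj/htdocs/sites/default/files/images"
        path = os.path.join(MOUNT, "nfs_write_check.bin")
        payload = os.urandom(64 * 1024)  # 64 KiB of random bytes

        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # push the data out, not just into the page cache

        print("wrote %d bytes, stat reports %d" % (len(payload), os.path.getsize(path)))

    If this reports 0, the mount itself is dropping writes and imagecache is not to blame; if it reports the full 64 KiB, look instead at how imagecache writes files (for example its temporary-file-then-rename step).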


  • JavaScript naming conventions

    - by ManuPK
    I come from a Java background and am new to JavaScript. I have noticed many JavaScript methods using single-character parameter names, as in the following example: doSomething(a,b,c). I don't like it, but a fellow JavaScript developer convinced me that this is done to reduce file size, noting that JavaScript files have to be transferred to the browser. Then I found myself talking to another developer, who told me that Firefox will truncate variable names to load the page faster. Is this a standard practice among web browsers? What are the best-practice naming conventions to follow when programming in JavaScript? Does identifier length matter, and if so, to what extent?


  • Tp_smapi on Thinkpad T440

    - by user2597381
    I have a ThinkPad T440 running Ubuntu 14.04 (using i3wm) and have had trouble installing tp_smapi. I successfully installed the package with dpkg -- I can't remember whether I used apt or just downloaded it somewhere -- but I can't load the module. When I use modprobe on tp_smapi, I get this: "modprobe: ERROR: could not insert 'tp_smapi': No such device or address". Note that this is different from the error I get when I mistype the module name. I read somewhere that I might have to rebuild my kernel for it to work, but I'd thought that the kernel I was using (3.13.0-24) already had that set up. Has anyone gotten it to work? I hate how Lenovo manages the batteries!


  • VLC does not play any file (any video/.mp3) on the local machine; it closes when a file is opened

    - by hsemarap
    When I run VLC from the terminal I get the following. In the VLC dialog box:

    Your input can't be opened: VLC is unable to open the MRL 'file:///media/Ent/movies/the%20mask.avi'. Check the log for details.

    In the terminal:

    VLC media player 2.0.1 Twoflower (revision 2.0.1-0-gf432547)
    [0x8fe8f8] main input error: open of `file/xspf-open:///home/para/.local/share/vlc/ml.xspf' failed
    [0x8fe8f8] main input error: Your input can't be opened
    [0x8fe8f8] main input error: VLC is unable to open the MRL 'file/xspf-open:///home/para/.local/share/vlc/ml.xspf'. Check the log for details.
    [0x8f1aa8] main interface error: no suitable interface module
    [0x8f9b08] main interface error: no suitable interface module
    [0x8be008] main libvlc error: option http-user-agent does not exist
    [0x8be008] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
    [0x8f9b08] qt4 interface error: Unable to load extensions module
    [0x7f5280000b78] main input error: open of `file:///media/Ent/movies/the%20mask.avi' failed
    [0x8fc718] main playlist error: could not export playlist


  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below you can rebuild all indexes for a specified table or for all tables in a given database.

    Why?
    The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. It is like loading your Christmas shopping into the trunk of your car: when the trunk is full, you continue loading some onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out, the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect?
    Depending, of course, on how large the table is and how fragmented the data is, it can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild?
    As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table are automatically rebuilt. A table can have only one clustered index.

    How to rebuild all the indexes for one table:
    The DBCC DBREINDEX command will rebuild all of the indexes on a given table when you pass it a blank index name, as the script below does.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]  -- enter your database name here

        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE table_type = 'base table'

        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)  -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do?
    It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it still allows read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor?
    It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other non-clustered indexes rebuilt because a clustered index was rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index; this value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation?
    Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)

        -- Specify the table and index names
        SELECT @IndexName = 'index_name'   -- name of the index
        SET @ID = OBJECT_ID('table_name')  -- name of the table

        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName

        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here?
    The Scan Density. Ideally it should be 100%; as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different?
    The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.


  • Funky Behavior with NoTracking Query Entities

    I’ve been using a lot of NoTracking queries to grab lists of data that I don’t need change tracked. It enhances performance and cuts down on resources. There are some nuances to these entities, however. One of the interesting behaviors of EF4’s Lazy Loading is that even entities you have queried with NoTracking on will still lazy load related entities. Unless you’ve read this somewhere (it’s on the ADO.NET team’s blog post which introduces...


  • Simple alternating text in Visual Basic 2008

    - by Josh Grate
    I have a simple, complete program, but I would like to add one more feature to it and I'm not sure how. I have it set up to send a message automatically every 7 seconds when a text field is selected, repeating the same message each time. What I would like is for it to alternate between two separate messages instead of just repeating the one, posting at an interval of 12 seconds. Can you help me? Here is my code:

        Public Class Form1
            Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
                SendKeys.Send(TextBox1.Text)
                SendKeys.Send("{ENTER}")
            End Sub

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Timer1.Enabled = True
                Timer1.Interval = (TextBox2.Text)
            End Sub

            Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
                Timer1.Enabled = False
            End Sub

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Timer1.Enabled = False
            End Sub
        End Class
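
    The usual pattern for alternating is a message list plus a counter: each timer tick sends messages[counter % len(messages)] and then increments the counter. A sketch of just that logic in Python (threading.Timer standing in for the WinForms Timer; all names illustrative):

        import threading

        messages = ["first message", "second message"]
        counter = 0

        def tick():
            global counter
            print(messages[counter % len(messages)])  # stand-in for SendKeys.Send(...)
            counter += 1
            if counter < 4:  # demo only: stop after four ticks
                threading.Timer(12.0, tick).start()  # fire again in 12 seconds

        tick()

    In the VB form above, the equivalent would be a class-level counter incremented inside Timer1_Tick that chooses between TextBox1.Text and a second text box, with Timer1.Interval set to 12000.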


  • How can a collection class instantiate many objects with one database call?

    - by Buttle Butkus
    I have a baseClass where I do not want public setters. I have a load($id) method that retrieves the data for that object from the db. I have been using static class methods like getBy($property, $values) to return multiple class objects using a single database call. But some people say that static methods are not OOP. So now I'm trying to create a baseClassCollection that can do the same thing. But it can't, because it cannot access the protected setters. I don't want everyone to be able to set the object's data, but it seems to be an all-or-nothing proposition: I cannot give just the collection class access to the setters. I've seen a solution using debug_backtrace(), but that seems inelegant. I'm leaning toward just making the setters public. Are there any other solutions? Or should I even be looking for other solutions?
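
    Since PHP-style visibility has no "friend class" notion, one language-agnostic way out of the all-or-nothing problem is to keep the hydrating factory on the base class itself, so the collection code never calls a setter at all. A sketch of the shape of it in Python (sqlite3 standing in for the real database; all names illustrative):

        import sqlite3

        class BaseModel:
            table = "items"

            def __init__(self):
                self._data = {}  # non-public state; no public setters

            @classmethod
            def _from_row(cls, row):
                # The one sanctioned hydration path; it lives on the class,
                # so nothing outside the hierarchy needs setter access.
                obj = cls()
                obj._data = {k: row[k] for k in row.keys()}
                return obj

            @classmethod
            def get_by(cls, prop, values, conn):
                # One query, many objects.
                marks = ",".join("?" * len(values))
                rows = conn.execute(
                    "SELECT * FROM %s WHERE %s IN (%s)" % (cls.table, prop, marks),
                    values)
                return [cls._from_row(r) for r in rows]

        conn = sqlite3.connect(":memory:")
        conn.row_factory = sqlite3.Row
        conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO items VALUES (?, ?)",
                         [(1, "a"), (2, "b"), (3, "c")])
        print([o._data["name"] for o in BaseModel.get_by("id", [1, 3], conn)])  # ['a', 'c']

    The static getBy() and this classmethod are morally the same thing; the difference is only that the factory sits beside the protected state it fills, so the collection stays a thin wrapper over it.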


  • IDirect3DDevice9Ex and D3DPOOL_MANAGED?

    - by bluescrn
    So I wanted to switch to IDirect3DDevice9Ex, purely for the SetMaximumFrameLatency function, as fullscreen vsynced D3D seemed to produce noticeable input lag. But then it tells me 'ha ha ha! now you can't use D3DPOOL_MANAGED!':

    Direct3D9: (ERROR) :D3DPOOL_MANAGED is not valid with IDirect3DDevice9Ex

    Is this really as unpleasant as it looks (when you're relying quite heavily on managed resources), or is there a simple solution? If it really does mean manual management of everything (reloading all static textures, VBs, and IBs on a device reset), is it worth the hassle? Will IDirect3DDevice9Ex bring enough benefit to make it worth writing a new resource manager? I'm starting to think I must be doing something wrong, due to this:

    Direct3D9: (ERROR) :Lock is not supported for textures allocated with POOL_DEFAULT unless they are marked D3DUSAGE_DYNAMIC.

    So if I put my (static) textures in POOL_DEFAULT, do they need flagging as D3DUSAGE_DYNAMIC, just because I lock them once to load the data in?


  • How do I handle "Firmware file X not found" stopping bootup?

    - by John Baber
    I recently upgraded my desktop Mac to Precise and now can't get past the Ubuntu 12.04 splash screen. The splash freezes with this written on it:

    [ 19.931097] b43-phy0 ERROR: Firmware file "b43/ucode16_mimo.fw" not found
    [ 19.9311126] b43-phy0 ERROR: You must go to http://wireless.kernel.org/en/users/Drivers/b43#devicefirmware and download the correct firmware for this driver version. Please carefully read all instructions on this website.

    This happens when I choose 3.2.0-generic from the GRUB menu. If I choose to load older kernels, I still get the same thing. How can I make my computer finish booting again? I'm able to ssh into the machine and poke around, but the actual screen freezes at the splash.


  • Boot problem with Ubuntu 11.10

    - by leonard
    So, I am an Ubuntu newbie and I don't know a thing about it. I tried to install restricted extras (am I using that name right?) through the terminal, but it seems I failed :) maybe because the internet connection crashed during the installation... Anyway, I restarted and got the Ubuntu intro screen at boot, and then a black screen appeared with this information: checking battery state. I restarted and it showed something else, so I rebooted 5 times. Then a screen appeared with: * starting bluetooth * pulseaudio configured for per-user sessions * saned disabled; edit /etc/default/saned. I tried the fix from "My fresh installation doesn't load. (PulseAudio problem)", but I didn't know how to save, so I just skipped to update-grub; anyway, nothing helped. What do I do?


  • Google BigQuery - Best Practices for Loading your Data and open Office Hours

    Michael Manoochehri and Ryan Boyd from the DevRel team for cloud data services will be streaming to you live! They'll be discussing how to load your data into BigQuery and the various options available -- from commercial ETL tools to App Engine's Pipeline API and MapReduce frameworks, to simple UNIX command-line tools. They'll then open it up for general office hours on ingestion and other topics. Please use the moderator link to ask your questions.


  • How to use OpenGL functions from multiple threads?

    - by Robert
    I'm writing a small game using OpenGL, and I'm implementing basic networking. I'm facing a problem: I have a thread in my client socket class that checks for available data, and when there is data I raise an event like this:

    immutable int len = this.m_socket.receive(data);
    if(len > 0)
    {
        this.m_onDataEvent(data);
    }

    Then in my game class I have a function that handles and parses the data:

    switch(msgId)
    {
        case ProtocolID.CharacterData:
            // Load terrain with OpenGL, character model...
    }

    I'm not able to call OpenGL functions from there because my OpenGL context was created on a different thread. I really don't know how to solve this problem; I tried Google, but it's really hard to find a solution. I'm using the D programming language, if that helps.
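
    The usual fix is to never touch OpenGL from the socket thread at all: the network thread only enqueues parsed messages, and the thread that owns the GL context drains the queue at the top of each frame. A minimal sketch of that hand-off, in Python rather than D (names illustrative; print statements stand in for GL calls):

        # Producer/consumer hand-off: network thread enqueues, render thread drains.
        import queue
        import threading
        import time

        events = queue.Queue()  # thread-safe; the only object shared between threads

        def network_thread():
            # Stand-in for the socket read loop: just produces fake messages.
            for i in range(3):
                time.sleep(0.1)
                events.put(("CharacterData", i))  # enqueue; never touch GL here

        threading.Thread(target=network_thread, daemon=True).start()

        # Render loop: this thread owns the (imaginary) GL context.
        for frame in range(10):
            while True:
                try:
                    msg_id, payload = events.get_nowait()
                except queue.Empty:
                    break
                # Safe to "call GL" here: we are on the context-owning thread.
                print("frame %d: handling %s(%r)" % (frame, msg_id, payload))
            time.sleep(0.05)  # stand-in for draw + swap buffers

    In D the same shape works with a mutex-guarded array or std.concurrency message passing; the key point is that m_onDataEvent should store the message, not act on it.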


  • Canon multifunction printer: scanner doesn't work, printer does

    - by Holger Böttcher
    Model: Canon Pixma MG2260. I have installed Ubuntu 14.04 Trusty Tahr, and now I have a problem with my multifunction device: the printer works, but the scanner doesn't! I downloaded Canon's Linux software for the 2200 series from Canon's web site: 6 zip folders, 3 .deb and 3 .rpm, in each case one for the printer, one for the scanner, and one for other things. I can download them to my PC and unpack them, but they do not install. Does anyone know what I have to do about this?


  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because I think they'd get more people using it. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say, or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning, and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to fill the file with the character 0x0. I never got a computer science degree; I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk subsystem. But when you want to test compression and decompression, that can be an issue.

    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of: I am seeing very realistic performance from decompressing the information for reads through SQLIO.

    But what about writes? Well, the issue is: what does SQLIO write? I don't have access to the code, but I do have access to the results. I ran two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file, then ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB compressed to 275,871 KB; after SQLIO it compressed to 608 KB).

    So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.

    Should I find some other mechanism for testing? If all I'm interested in is establishing performance to my own satisfaction, yes. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion.

    Oh, and before I go, I get to brag a bit: measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
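
    The 99% number is easy to reproduce: a buffer of NULL bytes is about the most compressible input there is, which is exactly why a NULL-filled test file says nothing about compressing real data. A quick illustration with zlib (DEFLATE, the algorithm behind ZIP):

        import os
        import zlib

        size = 1024 * 1024  # 1 MiB test buffers

        nulls = b"\x00" * size      # what SQLIO writes: "air"
        noise = os.urandom(size)    # the other extreme: incompressible noise

        print("null bytes : %d -> %d" % (size, len(zlib.compress(nulls))))
        print("random data: %d -> %d" % (size, len(zlib.compress(noise))))
        # Typical output: the NULL buffer shrinks to around a kilobyte
        # (better than 99.9% compression), while the random buffer barely
        # shrinks at all. Real database pages fall somewhere in between.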


  • Help bonding 3G connections for RTP streaming

    - by enrique
    First of all, sorry for contacting you here. I turn to you after reading all the material I found about this, and I still can't get it set up. My question is: can I configure load balancing across the outgoing links in some way? I have a hub with 3 USB 3G modems, and I can connect all 3 simultaneously, with an upload speed of about 500 kb on each, approximately, and a dynamic IP on each. I do unicast RTP streaming with VLC, with a bandwidth of 1.5 Mb, i.e. the sum of the three modems. I have been searching around ifenslave and iproute. Then I found a draft of VLC's multicat. I understood that this could do it, but when I configure it, traffic only moves over one card. If extending the information would help, I will gladly do so. Eternally grateful in advance.


  • How to Disable or Uninstall Android Bloatware

    - by Chris Hoffman
    Manufacturers and carriers often load Android phones with their own apps. If you don't use them, they just clutter your system, and some run in the background, draining resources. Take control of your device and stop the bloatware. We'll be focusing here on disabling, also known as "freezing", bloatware. It's a safer process than uninstalling the bloatware completely, and it is also easier to accomplish with free apps.


  • How to crawl a web page with dynamic content added by JavaScript

    - by blunderboy
    There is news that Google's bots now have the capability to understand our JavaScript code, which means it is possible to fully crawl a web page that has lazy loading enabled. I am using Apache Nutch to crawl websites, but I don't think it has the capability to fetch the URLs that JavaScript injects into the HTML page when the page is scrolled down. I see a lot of websites doing lazy loading for performance reasons. So can somebody please explain how I can crawl the data that appears in the HTML page on lazy load (on scrolling the page down)?
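
    One common approach, sketched under the assumption that a browser-automation tool such as Selenium is acceptable alongside Nutch: drive a real browser, scroll until the page height stops growing so the lazy-loaded content gets injected, then hand the rendered HTML to your parser.

        # Hypothetical lazy-load scraper using Selenium; requires a local
        # Firefox plus geckodriver. The URL is a placeholder.
        import time
        from selenium import webdriver

        driver = webdriver.Firefox()
        driver.get("http://example.com/lazy-page")

        last_height = driver.execute_script("return document.body.scrollHeight")
        while True:
            # Trigger the same scroll event a human would.
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
            time.sleep(2)  # give the page's JavaScript time to inject content
            new_height = driver.execute_script("return document.body.scrollHeight")
            if new_height == last_height:  # nothing more got loaded
                break
            last_height = new_height

        rendered_html = driver.page_source  # now includes the lazily added markup
        driver.quit()
        print(len(rendered_html))

    The rendered_html string is what you would feed into your existing parsing pipeline in place of the raw fetch that Nutch performs.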


  • How should I handle "real time" events in an online strategy game?

    - by Hojat Taheri
    Some online strategy games have real-time events. For example, when you send troops to attack somewhere, the attack happens at the right time in the future. Checking the database again and again, every second, to get the list of attacks currently happening would cause heavy load. Is there any technique to achieve this goal? Another example: you want to attack a village 3 hours away; you send troops, and the attack occurs 3 hours later. Should there be a script that checks the database every second in order to run the query at the specified time?
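
    A common alternative to polling the whole table every second, sketched here with illustrative names: keep pending events in an in-memory priority queue ordered by resolve time (persisting them to the database only as a backup), and sleep exactly until the next event is due.

        # In-memory event scheduler: a min-heap keyed on resolve time means
        # the process only wakes up when something is actually due.
        import heapq
        import time

        pending = []  # heap of (resolve_at, description)

        def schedule(delay_seconds, description):
            heapq.heappush(pending, (time.time() + delay_seconds, description))

        schedule(1.0, "attack on village A")
        schedule(2.5, "attack on village B")

        while pending:
            resolve_at, description = pending[0]  # peek at the soonest event
            now = time.time()
            if now < resolve_at:
                time.sleep(resolve_at - now)  # sleep until due; no 1-second polling
                continue
            heapq.heappop(pending)
            print("resolving:", description)  # here you would run the battle query

    The other widely used trick is lazy resolution: store only the resolve-at timestamp, and compute the outcome the first time anyone looks at the village after that timestamp has passed, so no background process is needed at all.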


  • My Ubuntu with Unity not loading after the last reboot

    - by Abonec
    I have an Asus U36SD, and after the last reboot I can't start up my Ubuntu 11.10. Usually I suspend my notebook by closing the lid, but today I rebooted it and it is not starting up. Booting proceeds normally up to the login screen, but if I move the mouse cursor, the image immediately switches to a console (without any error; only the normal startup messages) and back to the login screen. I can type my password and boot continues loading, but after a few moments it again switches back to the dark console and then to the login screen. I can load recovery mode, but if I touch the cursor (by mouse or the internal touchpad) it again switches back to the console and to the login screen. If I use only the keyboard, it works fine. Where can I see detailed log information about my problem?

