Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • Terrible Performance with SATA Drives on Dell PowerEdge, steps to troubleshoot?

    - by Tom
    I had asked this question earlier and the question went missing, so here it is again. Bought a Dell PowerEdge 2950 to use as an in-house QA server. Disk performance is beyond terrible: 1000-4000 ms response times on the drive that holds our SQL Server database .mdf, and the SQL Server disk queue climbs upwards of 300 at times. I'm a software guy; can anyone help me with steps to determine the issue? I don't know what RAID controller it has. How can I determine that? I'm speculating it could be a BIOS issue. Perhaps the server used to have another kind of drive in it and when I added SATA the ??? buffer size is wrong??? Perhaps I chose the wrong options (I chose the defaults) when setting up the RAID 1 arrays? I thought RAID 1 was a performance array?
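
    For reference, a hedged first step on the controller question (assuming Dell OpenManage Server Administrator is installed for the first command; on these boxes a common culprit is the controller cache falling back to write-through because the battery is missing or dead, which omreport will show):

        omreport storage controller                       :: PERC model, cache size, write policy, battery state
        wmic path win32_scsicontroller get name,manufacturer   :: what Windows itself sees as the storage controller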

    Read the article

  • Site-to-site VPN

    - by ronadona
    We are a small business based in Sydney that has opened a new office in London. The Sydney office has 25 employees and the London office has 6, so the traffic isn't that high. The files to be transferred are Excel sheets of 15 MB max. Both locations have MS Server 2008 and Fortigate gateways. I set up a site-to-site VPN but it's extremely slow. Maybe this is because our upload speed is only 1 Mbps. We will increase the upload speed to 20 Mbps in both locations, but I am afraid that this will not solve the problem, as the two locations are far from each other and the upload upgrade alone won't fix it. What's the best way to go? Shall we find a provider for the VPN, or is there another technology that can be used over the internet without paying extra costs? Many thanks!
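
    Before spending on bandwidth it may be worth measuring whether the tunnel is limited by throughput or by latency; copying files over SMB on a roughly 300 ms Sydney-London link is often slow regardless of bandwidth because the protocol is chatty. A rough sketch, run between one machine in each office across the VPN (the address 10.1.2.10 is a placeholder, and iperf3 binaries for Windows are assumed):

        ping -n 20 10.1.2.10          :: round-trip time across the tunnel
        iperf3 -s                     :: on the London side
        iperf3 -c 10.1.2.10 -t 30     :: on the Sydney side: achievable throughput through the VPN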

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS device. When I write a file to the device using eSATA or an SMB share, I cannot write files over 4 GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP it doesn't matter; I can write files of any size. E.g. I wrote one on there last night which was 30 GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.
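
    FAT32 does cap individual files at 4 GiB, so confirming the filesystem on the enclosure's disk is a reasonable first check, and if it really is FAT32 then the 30 GB file written over FTP is worth verifying rather than trusting, which speaks to the data-integrity concern. A sketch from a Linux box with the drive attached over eSATA (the device name is an assumption):

        sudo blkid /dev/sdb1       # TYPE="vfat" would indicate FAT32
        df -T                      # filesystem type of anything currently mounted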

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read many variants today and got some general knowledge, so here are the steps of my thinking, in pictures (horrible paint.net ones). We need to develop a grid system so we only check things nearby, performing a simple check to cut out the deep check, and only at the last stage doing a deep check like per-pixel collision. Step 1 - Let p1, p2 be some sprites. Let's first just do a circle collision check. Because of the large distance between p1 and p2 this fails, so of course we don't need to test more deeply. But if we have not 2 but 20 objects, why do we even need to circle-test something so far outside of our view? Step 2 - Add a basic column system. Now we don't bother with p2 if it's in a column far from p1's column, so we don't even do the circle test. But p3 is in the same column, so we do the circle test, which of course will fail. Step 3 - Let's improve the column system into a grid system with a grid cell size just like the p1, p2, p3 collision boxes, so we also cut out things far above or below p1. And this is all great until the BIG OBJECTS come along, which are some kind of platforms. They are much bigger than a grid cell. The circle test for them will succeed, but the deep check for the whole big object will fail. And that's the part I can't get. How do I store the grid position of a big object? Like 4 grid coords for the big object's vertices? And if one of them is close to p1, do a circle check for the centre of the big object and then a deep one if it succeeds? Am I doing it wrong? My possible solution:

    Read the article

  • Ubuntu 12.04.1 LTS and Nvidia driver (304.51) 64bit: problem 640x480

    - by nibianaswen
    I have a problem with this configuration: Asus K55V, Ubuntu 12.04 LTS and Nvidia driver 304.51. I removed the nouveau driver with: apt-get --purge remove xserver-xorg-video-nouveau I installed the official nvidia driver (from www.nvidia.com), but when I reboot the PC the screen resolution is only 640x480 and the monitor is resized. No change to xorg.conf solves the problem. Now I have uninstalled the nvidia driver and reinstalled with sudo apt-get purge nvidia-current sudo apt-add-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current When I reboot, the screen resolution and size are OK, but if I start nvidia-settings I receive the message: You do not appear to be using the NVIDIA X driver. and with the command: sudo lshw -c display | grep driver I receive configuration: driver=i915 latency=0 This sounds like the system is using the Intel card. When I launch the command lspci | grep VGA the output is: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1058 (rev ff) And there is no /etc/X11/xorg.conf. I have read a lot of guides on the internet but without success. How can I use the nvidia card with the driver that I have installed?
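
    The two VGA controllers suggest an Optimus laptop, where the Intel GPU drives the panel and the discrete NVIDIA chip cannot simply take over the display; on 12.04 the usual route was Bumblebee rather than the plain NVIDIA driver. A hedged sketch, using the PPA and package names as they were commonly documented at the time:

        sudo apt-get purge nvidia-current           # avoid mixing the plain driver with the Bumblebee stack
        sudo apt-add-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun glxgears                            # needs mesa-utils; runs the program on the NVIDIA GPU on demand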

    Read the article

  • #altnetseattle - Kanban

    - by GeekAgilistMercenary
    The two main concepts of Kanban are to keep the queues minimal and to maintain visibility. Management/leadership needs to make sure the Kanban queue doesn’t get starved.  This is key and also very challenging, since the queue needs to be minimal but also can’t get too small during the course of work.  This is to maintain maximum velocity. The phases of the Kanban need to be kept flowing too; bottlenecks need to be removed ASAP when they are brought up. Victory Wall – I dig that idea.  Somewhere to look to see the success of the team. The POs work in Rally or other tools for some client management, but it causes issues with the lack of "visibility" – a key fundamental ideal & part of Kanban. One of the big issues is fitting things into a sprint when Kanban is used with Scrum, but longer sprints are wasteful. Kanban work items are of a set size. At this point I got a bit sidetracked by the actual conversation and missed out on note taking.  Overall, I would say people doing Kanban and Lean-style software development are some of the happiest coders around.  The clean focus, good velocity, sizing, and other approaches that Kanban brings help developers be the rock stars and succeed. This is definitely a topic I will be commenting on a lot more in the near future.

    Read the article

  • Tweaks to make Cleartype better at high resolutions?

    - by ULTRA_POROV
    ClearType is great when displaying small text (say 10-16px). However, when you display something above 20px it starts looking like mud. Just compare it to Photoshop. Photoshop's rendering at small sizes is not very impressive, too blurry. But if you compare at 20px, Photoshop wins every time. ClearType looks jaggy around the edges, almost as if there were no ClearType at all. Can this be fixed, or is it just the way ClearType is?

    Read the article

  • What software allows editing text with furigana professionally?

    - by Julian
    I'm studying Japanese and need to write a lot of text with furigana. I've been using Word so far, but my main concern is that entering furigana is not only quite clumsy (no hotkey), but more importantly, once entered you can't globally change either its font or its size; you need to change them one by one. This is a deal-breaker for me since my average text contains hundreds of entries. There is a hack you can do, as pointed out by another guy on SU, but I found that by using it I could (and did) break my document easily. My question is: is there software specifically designed to work with Japanese text that also has its UI in English? As stated above, I need something that has furigana editing as a first-class citizen.

    Read the article

  • What does dd conv=sync,noerror do?

    - by dding
    So in what case does adding conv=sync,noerror make a difference when backing up an entire hard disk into an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why, on Linux (Fedora)? Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error while reading a block (let's say the block size is 100M), does dd just skip the 100M block and read the next block without writing anything (dd conv=sync,noerror writes zeros for the 100M of output, so what happens in this case)? And is the hash of the original hard disk different from that of the output file if done without conv=sync,noerror? Or is that only when a read error occurred?
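
    For reference, a minimal sketch of the forensic-style invocation (device and output paths are placeholders): noerror tells dd to keep going after a read failure instead of aborting, and sync pads each short or failed read out to the full block size with zeros, so byte offsets in the image stay aligned with the source. A smaller bs limits how much good data is thrown away around each bad sector. Hashes of source and image will only match if every block was read cleanly; once a bad sector has been padded with zeros, the image necessarily hashes differently from whatever those unreadable sectors actually contained.

        sudo dd if=/dev/sdb of=/mnt/case01/disk.img bs=4096 conv=noerror,sync
        sha256sum /dev/sdb /mnt/case01/disk.img   # meaningful comparison only if dd reported no read errors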

    Read the article

  • How to extract the images from a doc file without having Microsoft Office?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/06/29/how-to-extract-the-images-from-the-doc-file-without.aspx Many times we get a doc file that has some images in it, and we need to extract them using the word processor that comes with Windows 7 (not Microsoft Office Word). Looking at this article http://www.techrepublic.com/blog/itdojo/save-images-in-microsoft-word-documents-as-separate-files/135 the technique there is only useful when you have Microsoft Word installed. Now if I don’t have Microsoft Office, then what? No problem, here is a trick: open the doc file (in the word processor that comes with Windows 7), select the image, right-click it and choose Cut. Open Microsoft Paint and paste it there. Without clicking anywhere else, click the Crop icon on the toolbar. Now you have your image at the same size as in the Word file. Don’t worry about the image format; Microsoft Paint can save it in PNG format.
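
    A related trick, in case the file is actually a .docx rather than a legacy .doc: the Office Open XML format is a zip container, so on any machine with an unzip tool the embedded images can be pulled out directly, at original quality and without Paint. A sketch (the file name is a placeholder):

        unzip -o report.docx 'word/media/*' -d extracted_images/
        ls extracted_images/word/media/      # image1.png, image2.jpeg, ...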

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?
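
    One option worth noting for the third disaster type is a time-delayed streaming replica: a standby that deliberately lags the primary by a few hours, so a bad DELETE or DROP can be caught before the standby replays it, and the standby can then be promoted or rolled forward to just before the mistake. A hedged sketch of the standby's recovery.conf, assuming PostgreSQL 9.4 or later (earlier versions lack recovery_min_apply_delay) and placeholder connection details:

        standby_mode = 'on'
        primary_conninfo = 'host=pg-primary.example.com user=replicator'
        recovery_min_apply_delay = '4h'   # the standby replays WAL four hours behind the primary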

    Read the article

  • Virtualbox fullscreen mode problem

    - by AtharvaI
    Hi all, I have a slightly weird issue with VirtualBox. My host OS is Win7 (64-bit) and the guest OS is Ubuntu 10.10 (64-bit). When I switch to fullscreen mode in VirtualBox, the Ubuntu display resizes to fit my screen size; however, after that the display is not updated. So if I click a menu or something I don't see it appear, but it seems to work in the background: if I click a menu in fullscreen mode I don't see anything happen, but if I then switch to windowed mode I see the menu already open. I have installed the VirtualBox Guest Additions. If anyone has a similar issue or has found a solution, please let me know. Thanks.
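
    A stale guest display like this is sometimes tied to the VM's 3D acceleration setting interacting badly with the Guest Additions, so as a hedged experiment it may be worth toggling it (and giving the virtual GPU more video memory) with the VM powered down, then testing fullscreen again. A sketch (the VM name is a placeholder):

        VBoxManage modifyvm "Ubuntu 10.10" --accelerate3d off
        VBoxManage modifyvm "Ubuntu 10.10" --vram 64    # more video memory for the virtual display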

    Read the article

  • MPlayer refuses to generate mono wav file

    - by JCCyC
    I want to downsample an existing audio file to 8KHz mono. This command line downsamples it to stereo: mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1 -ao pcm:waveheader:file="/tmp/blah1.wav" ~/from_my_cellphone.3ga It generates a file that the file utility identifies as stereo: $ file /tmp/blah1.wav /tmp/blah1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 8000 Hz Now, if I read the documentation correctly, I should add pan=1:0.5:0.5 so I get a file that's half the size: mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1:pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga But it doesn't! blah2.wav is identical to blah1.wav! What am I doing wrong?
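
    A hedged reading of what's going wrong: MPlayer's -af filters are separated by commas, so writing resample=8000:0:1:pan=1:0.5:0.5 most likely hands the pan part to resample as extra arguments and the pan filter never runs. Splitting pan out into its own filter should produce the mono file (same paths as above):

        mplayer -quiet -vo null -vc dummy \
          -af volume=0,resample=8000:0:1,pan=1:0.5:0.5 \
          -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga
        file /tmp/blah2.wav     # should now report mono 8000 Hz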

    Read the article

  • Function calls to calls in windows api

    - by Apeee
    I am a beginner learning C, and I find it hard to grasp the whole programming concept, so hopefully this will help clear up some things along the way. When programming on Windows, which is my aim for the time being, it is really hard for me to understand how Windows communicates with the programs that run on it. A question I have been pondering is this: when you call a function that lives somewhere else on disk or in memory (not a function you wrote yourself and included in the compilation), especially a Windows API function, how does the compiler know where that function is located so that the program can call it when it runs? For example, take a very simple program that displays a window reading "hello world". You would have to call Windows API functions to achieve such features as creating the window, setting its size, colors and so on... So basically what I am struggling to grasp is how the programs I write communicate with the platform/framework they run on (generally the Windows API on Windows). Apart from clarification on this, I would love a resource that explains the concept further. Thanks for your time!
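
    As a rough sketch of the mechanics (toolchain and file names are assumptions, using MinGW gcc as the example): the compiler only needs the declarations from windows.h; the linker does not copy the API code into your program but records, via an import library, that a symbol such as MessageBoxA comes from user32.dll; the Windows loader then maps that DLL and patches in the real addresses when the program starts.

        gcc hello.c -o hello.exe -luser32          # link against the user32 import library
        objdump -p hello.exe | grep "DLL Name"     # lists the DLLs the loader will resolve at run time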

    Read the article

  • Mirroring MySQL server with different configuration

    - by HTF
    I have to migrate a MySQL server to a different data centre, so I would like to create another MySQL slave server in the new DC and then promote it to master later on. I previously used LVM snapshots and Percona XtraBackup for this purpose, but this time I've changed the MySQL configuration in a way that prevents me from using those methods. Old server (backup): innodb_log_file_size = 256M innodb_log_files_in_group = 3 New server (restore): innodb_log_file_size = 512M innodb_log_files_in_group = 2 The XtraBackup script and LVM snapshots copy the whole directory structure, so the MySQL server won't start because the InnoDB logs have a different size. Is there any solution to avoid downtime in this case? I can't use mysqldumps as there are around 8,000 databases, so I would have to take the server down for a couple of hours. I was also thinking of using the old settings with XtraBackup and then changing them once the new server is promoted to master, which means less downtime, but I'm not sure if this will work? Thank you, Regards
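
    For what it's worth, the "restore with the old settings, change them later" approach matches how InnoDB log resizing was normally done on older MySQL versions (before 5.6.8 the files could not be resized in place), and doing it on the new slave only costs a short replication pause rather than application downtime. A hedged sketch of the switch-over step, with paths and the service name as assumptions:

        mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"   # force a full flush on the next shutdown
        service mysql stop
        mv /var/lib/mysql/ib_logfile* /root/old_ib_logfiles/
        # edit my.cnf: innodb_log_file_size = 512M, innodb_log_files_in_group = 2
        service mysql start                               # InnoDB recreates the log files at the new size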

    Read the article

  • uploading large files (mp4) to IIS 7.5 gives 500 Internal Server Error

    - by dragon112
    I made a website on which I need to be able to upload video files, and it worked for quite a while. However, at some point it just stopped working, and now IIS gives me a 500 Internal Server Error when I upload a video. Images do work (possibly due to their smaller size). I use an HTML form with a PHP server-side script to upload. I have already set the user permissions for the entire inetpub to allow all actions for the IIS user. If you have any idea what it could be, PLEASE tell me; I have been trying to fix this for weeks now. Thanks in advance!
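
    Two limits commonly behind large-upload failures on an IIS + PHP stack are PHP's own upload caps and IIS request filtering, whose maxAllowedContentLength defaults to roughly 30 MB. A hedged sketch of raising both (site name and values are examples only):

        %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:requestFiltering /requestLimits.maxAllowedContentLength:314572800
        rem and in php.ini (then restart the app pool / PHP):
        rem   upload_max_filesize = 300M
        rem   post_max_size      = 300M
        rem   max_execution_time = 300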

    Read the article

  • How do large companies handle software updates for users without administrative rights?

    - by CT
    I just started working for a small to medium size company doing IT support, maybe 150 or fewer users. Right now every user has administrative rights to their own machine. This allows them to install updates or whatever else they would like to. I'm tired of getting on users' machines that are bloated with crap they put on themselves. So my first thought was to take away administrative rights to their computers. This would also have other advantages, such as preventing a lot of drive-by malware on the web, etc. The problem is that users are then unable to install updates. (Even though I find most ignore these anyway.) How do large companies handle software updates on all client machines? EDIT: Windows environment. Most servers are Windows Server 2003 Enterprise. Clients are all Windows: Win XP, Vista, and 7.

    Read the article

  • Adobe Illustrator - Objects are Expanded on Restart

    - by Shatou Dev
    I use Adobe Illustrator extensively for mobile development. For example, for an Android app I have many illustrations at different densities and sizes. I use many effects such as outer glow, round corners and the like. However, sometimes when I close the .ai file and re-open it later, I find all my objects expanded. Effects are expanded; for example, the round corner effect is now expanded and I cannot resize any object. I am using a MacBook Pro. I suspect both Time Machine and Dropbox. As far as I can recall, it also appears with the CS6 version, not CS5.

    Read the article

  • linux: accessing thousands of files in hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of a similar size on a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file being indexed. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and I place one file at the end of each directory, like ./00/00/00/file000000.ext to ./99/99/99/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
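
    One common variant of this layout, shown as a small sketch (names and depth are illustrative, not a recommendation for any particular filesystem), is to derive the bucket from a hash of the file name rather than a sequential counter, so files spread evenly across directories no matter how they are named:

        f="file123456.ext"
        h=$(printf '%s' "$f" | md5sum | cut -c1-6)   # first 6 hex chars of the name's hash
        dir="${h:0:2}/${h:2:2}/${h:4:2}"             # e.g. 3f/a2/91
        mkdir -p "$dir" && mv "$f" "$dir/"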

    Read the article

  • Subversion installation on CentOS 5.8

    - by user57221
    I am trying to install Subversion on CentOS 5.8 using yum install subversion and it is throwing the error below. ..... .... Total size: 7.3 M Is this ok [y/N]: y Downloading Packages: Running rpm_check_debug ERROR with rpm_check_debug vs depsolve: libapr-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64 libaprutil-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64 libapr-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64 apr is needed by (installed) httpd-2.2.22-12051516.x86_64 /usr/lib64/libapr-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64 libaprutil-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64 apr-util is needed by (installed) httpd-2.2.22-12051516.x86_64 /usr/lib64/libaprutil-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64 Complete! (1, [u'Please report this error in http://bugs.centos.org/yum5bug']) How do I resolve this?
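
    A hedged reading of the error: the installed httpd (2.2.22 with that release number does not look like a stock CentOS 5 build) apparently came from a third-party repository and carries its own apr/apr-util, so yum cannot reconcile Subversion's dependencies against it. Before forcing anything, it may help to confirm where those packages came from:

        rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}  vendor=%{VENDOR}\n' httpd apr apr-util
        yum list installed httpd apr apr-util                 # the repo column shows where each package came from
        repoquery --whatprovides 'libapr-1.so.0()(64bit)'     # requires yum-utils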

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700), Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates an "sdcard/DCIM" but only stores some thumbnails there. Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises the photos and starts uploading them. Possible solution: could there be an option in the settings to set a preferred photo folder? Problem nr 2: Out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size: 1.1 MB files have been uploaded fine whilst some failed ones are 0.8 MB. Problem nr 3: The photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo into "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they are synced? After a while of usage, syncing and uploading files from different clients, it becomes a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all. Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily changed while U1 is running, does U1 consider that as files deleted and also delete them from the cloud?

    Read the article

  • How much RAM 64bit Windows 8 reserves to OS internal use?

    - by Barleyman
    Windows reserves some memory for its internal use which is not normally allocated to applications. This reserve is seen most easily if you run without a page file or limit the page file to a relatively small size (such as 3GB). Windows will allocate primarily RAM up to the limit, fill up the remaining free space in the page file (if any), and issue a low memory warning when there is no page file space left and the allocated RAM limit is exceeded. The limit appears to be a percentage of the total system RAM. The Windows 7 x64 limit is discussed here and methods for circumventing the "low memory warning" are discussed here. Disabling the low memory warning has some advantages (you can use some 600 MB more RAM on an 8 GB machine), but there is a serious disadvantage: when you're out of RAM, programs will crash. How much RAM can you allocate on 8 GB Windows 8 x64 before you get the low memory warning? Is it possible to adjust the warning threshold?

    Read the article

  • Is Oracle Database Appliance (ODA) A Best Kept Secret?

    - by Ravi.Sharma
    There is something about Oracle Database Appliance that underscores the tremendous value customers see in the product: repeat purchases. When you buy “one” of something and come back to buy another, it confirms that the product met your expectations, you found good value in it, and perhaps you will continue to use it. But when you buy “one” and come back to buy many more on your very next purchase, it tells you something else. It tells you that you truly believe you have found the best value out there. That you are convinced! That you are sold on the great idea and have discovered a product that far exceeds your expectations and delivers tremendous value! Many Oracle Database Appliance customers are such larger-volume repeat buyers. It is no surprise that the product has deeper penetration in many accounts where a customer made an initial purchase. The value proposition of Oracle Database Appliance is undeniably strong and extremely compelling. This is especially true for customers who are simply upgrading or “refreshing” their hardware (and reusing software licenses). For them, the ability to acquire world-class, highly available database hardware along with leading-edge management software and all of the automation is absolutely a steal. One customer DBA recently said, “Oracle Database Appliance is the best investment our company has ever made”. Such extreme statements do not come out of thin air; you have to experience it to believe it. Oracle Database Appliance is a low-cost product. Not many sales managers may be knocking on your door to sell it. But the great value it delivers to small and mid-size businesses and database implementations should not be underestimated.

    Read the article

  • How has an SSD drive affected the speed of your website (ASP.NET/LINQ/MS SQL database)?

    - by Sergey Osypchuk
    I have a small database (<1G), but we have a lot of complex logic in the website and the client complains about render time, which is 3-5 seconds. We are not Google, and thousands of users a day is our dream, so size is not a problem, but speed is important. Can anybody share experience with SSD drives for an ASP.NET (MVC)/LINQ/MS SQL based application? How did your performance increase? UPDATE: this whitepaper states that it will be 20 times faster. http://www.texmemsys.com/files/f000174.pdf

    Read the article

  • Nvidia 9 series or Intel HD 2000? [closed]

    - by EApubs
    I just tested an Nvidia 9300 GS card against an Intel Core i3's HD 2000 graphics. Here are the Windows Experience Index scores I got: Nvidia 9300 GS : Base Score 3.9 Processor : 7.1 Memory : 7.5 Graphics : 3.9 Gaming Graphics : 5.1 Hard Disk : 5.9 Intel HD 2000 : Base Score : 5.2 Processor : 7.1 Memory : 5.9 Graphics : 5.2 Gaming Graphics : 5.8 Hard Disk : 5.9 My questions are: when using Intel HD graphics, it reduces the score of my RAM! How is that possible? It checks the speed of the RAM, not the size (I think). Intel graphics take some of the RAM space, but how can that affect the speed? Of the two, which would be the better choice?

    Read the article
