Search Results

Search found 19690 results on 788 pages for 'result partitioning'.


  • VirtualBox, slow upload speed using NAT

    - by user1622094
    I'm running VirtualBox on an Ubuntu 12.04 server (host) with Windows 7 as the guest OS. I'm using the (virtual) Intel PRO/1000 MT network card. I get good download performance using both NAT and bridged network settings, but upload speed is really slow using NAT. I have tried this on two different servers, one brand new and one several years old; both gave the same result. If you can explain this behavior or have ideas for further tests I can perform, please let me know.


  • Windows 8 / Server 2012 RDP connection is slow

    - by Chris
    I recently installed Windows Server 2012 for development purposes at our office and noticed immediately that connecting via RDP is slow. It can take 5-10 seconds to connect at times, whereas connecting to any of our Win7 or Win2008R2 boxes takes at most 1-3 seconds. At first I chalked this up to the box itself needing a driver update or something, but just yesterday I installed Win8 on my desktop PC, and connecting from home to that machine produces the same result. There is a 3-4 second pause at "securing remote connection" and then again at "configuring remote session". I don't see any warnings in the event log, and once connected, there do not appear to be any performance issues. Is there a known problem with RDP connections on Windows 8 systems? Anything I should look for?


  • How do I cluster strings based on a relation between two strings?

    - by Tom Wijsman
    If you don't know WEKA, you can give a theoretical answer; I don't need literal code/examples. I have a huge data set of strings that I want to cluster to find the most related ones; these could just as well be seen as duplicates. I already have a set of pairs of strings that I know are duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that would return the most relevant candidate pairs of strings that are not yet known to be duplicates. I believe I need clustering for this, but which type? Note that I want to base this on word-occurrence comparison, not on interpretation or meaning. Here is an example of two strings we know to be duplicates (in our view of them): "The weather is really cold and it is raining." / "It is raining and the weather is really cold." Now, the following strings also exist (most to least relevant, ignoring stop words): "Is the weather really that cold today?" / "Rainy days are awful." / "I see the sunshine outside." The software would return the following two strings as most relevant, which aren't yet known to be duplicates: "The weather is really cold and it is raining." / "Is the weather really that cold today?" Then I would mark that pair as duplicate or not duplicate, and it would present me with another pair. How do I implement this in the most efficient way, so that it can be applied to a large data set?
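
    Although the asker doesn't need literal code, for readers here is a minimal sketch of the word-occurrence idea (not WEKA-specific; the stop-word list and tokenisation below are assumptions made up for illustration): score each candidate pair by Jaccard similarity of their word sets and surface the highest-scoring pairs that are not yet labelled.

        <?php
        // Score candidate string pairs by word-overlap (Jaccard) similarity and
        // list the most similar pairs first.
        function tokenize($s) {
            $stop  = array('is', 'it', 'the', 'and', 'that', 'are', 'i');
            $words = preg_split('/\W+/', strtolower($s), -1, PREG_SPLIT_NO_EMPTY);
            return array_values(array_unique(array_diff($words, $stop)));
        }

        function jaccard(array $a, array $b) {
            $inter = count(array_intersect($a, $b));
            $union = count(array_unique(array_merge($a, $b)));
            return $union > 0 ? $inter / $union : 0.0;
        }

        $strings = array(
            'The weather is really cold and it is raining.',
            'Is the weather really that cold today?',
            'Rainy days are awful.',
            'I see the sunshine outside.',
        );

        $pairs = array();
        for ($i = 0; $i < count($strings); $i++) {
            for ($j = $i + 1; $j < count($strings); $j++) {
                $score   = jaccard(tokenize($strings[$i]), tokenize($strings[$j]));
                $pairs[] = array($score, $strings[$i], $strings[$j]);
            }
        }
        usort($pairs, function ($x, $y) {            // highest score first
            if ($x[0] == $y[0]) return 0;
            return ($x[0] > $y[0]) ? -1 : 1;
        });

        foreach ($pairs as $p) {
            printf("%.2f  %s | %s\n", $p[0], $p[1], $p[2]);
        }

    Scoring every pair is O(n²), so for a truly huge data set one would normally block candidates first (for example with shingling/MinHash) and only score pairs inside each block.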


  • Is there a formula for this?

    - by Gortron
    TL/DR: Is there any way to work out whether known numbers between a known starting and ending figure should be positive or negative? I am developing an application in PHP which can import and read PDFs. The PDFs are financial ones, such as bank statements, with records of transactions in and out of a bank account. I only have PDFs to work with, no other formats such as CSV, unfortunately. I convert the PDF to HTML using pdftohtml and start parsing the data; the intended end result is an array of transactions. So far I have it working smoothly, collecting dates, descriptions and balance. Converting to XML instead doesn't help. There are other pieces of transactional data, such as debit or credit amounts. In the PDF, the credit amount is in one column and the debit amount is in another column, so it is quite clear in the PDF. However, when converted to HTML, the formatting is lost and therefore I don't know whether the amount was a credit or a debit. So, my question is: given a starting balance, an ending balance and several known figures in between, is it possible for a programme to work out whether those known figures are credit or debit amounts? I imagine there could potentially be several combinations of those known values that reach the ending balance, so I'd like to apply a formula that returns the credit/debit sequence only if it's the only possible solution. If there are several ways of adding/subtracting the known values to reach the end balance, I can ask the user to look at it manually, but I'd like to keep this to a minimum if possible. Possible to do, do you think? Thank you in advance for any help.
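
    There is no closed-form formula, but a brute-force sketch of the idea is small enough to show (the balances and amounts below are invented for illustration): try every credit/debit assignment, keep the ones that reconcile the opening and closing balance, and only trust the answer when exactly one assignment survives.

        <?php
        // Enumerate every +/- assignment of the amounts and keep those whose sum
        // reconciles the opening balance with the closing balance.
        function signCombinations($opening, $closing, array $amounts) {
            $n = count($amounts);
            $solutions = array();
            for ($mask = 0; $mask < (1 << $n); $mask++) {
                $balance = $opening;
                $signs   = array();
                for ($i = 0; $i < $n; $i++) {
                    $credit   = (bool) ($mask & (1 << $i));   // bit set => credit, clear => debit
                    $balance += $credit ? $amounts[$i] : -$amounts[$i];
                    $signs[]  = $credit ? '+' : '-';
                }
                if (abs($balance - $closing) < 0.005) {        // cent-level tolerance for floats
                    $solutions[] = $signs;
                }
            }
            return $solutions;
        }

        $solutions = signCombinations(100.00, 155.00, array(20.00, 50.00, 25.00));
        if (count($solutions) === 1) {
            echo 'Unique solution: ' . implode(' ', $solutions[0]) . "\n";   // "- + +"
        } else {
            echo count($solutions) . " candidate combinations; ask the user.\n";
        }

    The search space doubles with every transaction, so this only stays practical per page or per small group of rows; if the statement prints a running balance on each line, the sign of each row can instead be derived directly from consecutive balances.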


  • OpenGL: sluggish performance in extracting texture from GPU

    - by Cyan
    I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it. This requires extracting it from GPU memory. For this operation I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that. For example, an 8 MB texture (uncompressed) requires 3 seconds (yes, seconds) to be extracted. That's mind-boggling. I'm almost wondering if my graphics card is connected by a serial link... Well, anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed at so far. Edit: by moving the call to glGetTexImage() to a different place, the speed has improved a bit for the most dramatic samples: looking again at the 8 MB texture, it now requires 500 ms instead of 3 s. It's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timing remained in the 60-80 ms range). Using glFinish() didn't help either. Note that if I call glFinish() (without glGetTexImage), I get a fixed 16 ms result, whatever the texture size or complexity; it really looks like the timing of a frame at 60 fps. The timing is measured for the full rendering + saving sequence; the call to glGetTexImage() alone does not really matter. That being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to read it back.


  • Why is using a swap file over an SMB/NFS-mounted filesystem not possible in Linux?

    - by Avio
    I'd like to use another machine's unused RAM as swap space for my primary Linux installation. I was just curious about the performance of network RAM disks compared to local (slow) mechanical hard disks. The swapfile is on a tmpfs mountpoint and is shared through Samba. However, every time I try to issue:

        swapon /mnt/ramswap/swapfile

    I get:

        swapon: /mnt/ramswap/swapfile: swapon failed: Invalid argument

    and in dmesg I read:

        [ 9569.806483] swapon: swapfile has holes

    I've tried to allocate the swapfile with dd if=/dev/zero of=swapfile bs=1024 (but also =4096 and =1048576) and with truncate -s 2G (both followed by mkswap swapfile), but the result is always the same. In this post (dating back to 2002) someone says that using a swapfile over NFS/SMB is not possible in Linux. Is this statement still valid? And if so, what is the reason for this choice, and is there any workaround to get this working?


  • Ubuntu 10.04/CURL: How do I fix/update the CA Bundle?

    - by Nick
    I recently upgraded our server from 8.04 to 10.04, along with all the software on it. From what I've found online, it seems that the new version of CURL doesn't include a CA bundle and, as a result, fails to verify that the certificate of the server you're connecting to is signed by a valid authority. The actual error is:

        CURL error: SSL certificate problem, verify that the CA cert is OK.
        Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE: certificate verify failed

    Some places I've found suggest manually specifying a CA file or disabling the check altogether by setting an option when you call CURL, but I'd much rather fix the issue globally, rather than having to modify each application's CURL calls. Is there a way to fix CURL's CA problem server-wide so that all of the existing application code works as is, without needing to be modified?
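
    For readers who do end up needing the per-call workaround the asker would rather avoid, a minimal sketch looks like the following (the URL is a placeholder, and the bundle path assumes Ubuntu's ca-certificates package is installed). The server-wide route is usually just installing or refreshing that package so curl's default lookup finds a valid bundle; later PHP releases also expose a curl.cainfo setting in php.ini.

        <?php
        // Point one cURL handle at the system CA bundle explicitly.
        $ch = curl_init('https://example.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');

        $body = curl_exec($ch);
        if ($body === false) {
            echo 'cURL error: ' . curl_error($ch) . "\n";
        }
        curl_close($ch);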


  • XAMPP Closes the connection

    - by Miro Markarian
    I want my XAMPP Apache server to host a file (the file is around 250 MB), but the server closes the connection and won't let me download it. Does XAMPP or Apache have any download size limit or something? I tested with a smaller file and the problem is still present; it just doesn't let me download any file from the server. All I get in the error log is this:

        [Fri Sep 07 23:21:31.742625 2012] [authz_core:debug] [pid 3664:tid 396] mod_authz_core.c(808): [client x.x.x.x:23409] AH01628: authorization result: granted (no directives), referer: http://ammiprox.tk/greeneyes2910/
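
    One possibility worth ruling out, but only if the download is served through a PHP script rather than directly by Apache: PHP's memory limit, execution time limit and output buffering can all cut a large response short. A minimal sketch of streaming the file in chunks (the path and filename are placeholders, and this is an assumption about the setup, not a diagnosis):

        <?php
        $path = '/path/to/bigfile.zip';

        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($path));
        header('Content-Disposition: attachment; filename="bigfile.zip"');

        // Drop any output buffers so each chunk goes straight to the client.
        while (ob_get_level() > 0) {
            ob_end_flush();
        }

        set_time_limit(0);              // a 250 MB transfer can exceed max_execution_time
        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 8192);
            flush();
        }
        fclose($fp);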


  • Find last of match string automatically

    - by jowan
    I want to give entries an ID that is 7 digits long. When the first entry is created, it gets ID 0000001, and my problem is that I want to take the ID and add 1 to it every time a new entry is created. I have a bunch of code and am still confused about how to implement it:

        $str_rep  = "0000123";
        $str_rep2 = "0005123";   // My character string can be like this
        $str_rep3 = "0009123";   // or like this

        $match_number = array(1,2,3,4,5,6,7,8,9);   // I created this array to do it automatically, but it did not work.

        // So I do it manually
        $get_str  = strstr($str_rep, "1");
        $get_str2 = strstr($str_rep2, "5");
        $get_str3 = strstr($str_rep3, "9");

        // Result
        echo $get_str . "<br>";
        echo $get_str2 . "<br>";
        echo $get_str3 . "<br>";

    Thanks in advance.
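
    If the goal is simply "take the last ID, add one, and keep the 7-digit zero padding", an integer cast plus str_pad covers it with no per-digit matching. A minimal sketch (the starting value is just an example):

        <?php
        $lastId = '0000123';

        // (int)'0000123' is 123, so the numeric part needs no string searching;
        // str_pad() restores the leading zeros on the way back out.
        $nextId = str_pad((string) ((int) $lastId + 1), 7, '0', STR_PAD_LEFT);

        echo $nextId;   // prints 0000124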


  • Copy files with original folder structure, but to 8.3 format

    - by kokbira
    I have a folder with a lot of files and folders inside it. I would like to copy it to another location so that the result is a folder with the same file and folder structure, but with all filenames in 8.3 format. How can I do that? PS: Some files have extensions with more than 3 characters (e.g. home.sh3d, windows.theme, etc.), so when I talk about transforming all filenames to 8.3, I really mean transforming them to an 8.x format (i.e., without changing the extensions).


  • Installing Windows from Ubuntu while booting only from the hard drive

    - by WindowsEscapist
    My problem is unrelated to this workaround (the question) here, but the end result is that I cannot change boot order (or use a boot menu) on my laptop. It is currently running Ubuntu 12.04 with a dual-boot to Fedora if anything goes catastrophically wrong with Ubuntu (read "if I mess it up"). I would really like to install Windows 7 (but XP would be fine) on an empty FAT32 partition I have already made because of issues with WINE-emulated programs running more slowly than under Windows. The problem is, I can only boot from my hard drive. I can boot from other devices by removing the hard drive, but this is irrelevant because SATA is non-hotpluggable (I can't plug it back in to install). Is there any way I could boot up a Windows installer CD (or other CDs)? (I know how to keep my Linux distros.) I have both the .iso's and the physical CDs (or can obtain them). This may be unneeded, but just as a disclaimer this is completely legal. The computer belongs to me, I have admin privs, etc. I'm not doing anything shady!


  • Help with URL Rewrite

    - by bodesam
    This is the first time I'm doing this, and I have been doing some research on it. I have a page that selects some info from a database and displays it with a link to a second page that uses the result to query the database, something like this:

        $sel = mysql_query("select id, title from thetable");
        while ($row = mysql_fetch_array($sel)) {
            $id    = $row['id'];
            $title = $row['title'];
            echo "<a href='more.php?id=$id'>$title</a>";
        }

    The issue is that on the more.php page, instead of more.php?id=5 showing in the address bar, I want something like more/title. Secondly, as is the case on most sites, I want the link on the referring page to show this friendly URL on mouse hover, not more.php?id=5. I also notice that on most sites some words like 'a', 'and', 'the', etc. are usually removed from the URL title (even if they are there originally); moreover, how does one handle the situation where more than one record has the same title? How does one go about achieving this URL rewrite with .htaccess or whatever method is used? Thanks.
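
    A minimal sketch of the usual approach (all table/column names and the rewrite pattern below are illustrative, not a definitive recipe): store a "slug" built from the title next to each row, link to more/slug, and let an .htaccess rule map that back to the PHP script.

        <?php
        // Build a URL slug from a title: lowercase, keep letters/digits,
        // drop a few filler words, join with hyphens.
        // A matching .htaccess rule would look something like:
        //   RewriteEngine On
        //   RewriteRule ^more/([a-z0-9-]+)/?$ more.php?slug=$1 [L,QSA]
        function makeSlug($title, array $stopWords = array('a', 'an', 'and', 'the')) {
            $words = preg_split('/[^a-z0-9]+/', strtolower($title), -1, PREG_SPLIT_NO_EMPTY);
            $words = array_diff($words, $stopWords);
            return implode('-', $words);
        }

        $id    = 5;
        $title = 'The Weather Is Really Cold';
        $slug  = makeSlug($title);   // "weather-is-really-cold"

        // Link with the friendly URL so it is also what shows on hover.
        echo "<a href='more/" . htmlspecialchars($slug) . "'>" . htmlspecialchars($title) . "</a>";

    For duplicate titles, the usual trick is to append -2, -3, ... to the slug (or to keep the numeric id in the URL, e.g. more/5/weather-is-really-cold) so each row still resolves uniquely.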


  • OpenXML error “file is corrupt and cannot be opened.”

    - by nmgomes
    From time to time I hear people saying their new web application supports data export to Excel format. So far so good … but they don't tell the whole story … in fact, almost all the time what is happening is that they are exporting data to a comma-separated file or simply exporting GridView-rendered HTML to an .xls file. OK … it works, but it's not something I would be proud of. So … yesterday I decided to take a look at the Office Open XML File Formats specification (the Microsoft Office 2007+ format), based on well-known technologies: ZIP and XML. I started by installing the Open XML SDK 2.0 for Microsoft Office and playing with some samples. Then I decided to try it on a more complex web application, and the "file is corrupt and cannot be opened." message started happening. Google shows that many people suffer from the same problem, and it seems there are many reasons that can trigger this message. Some are related to the process itself, others to encodings or even styling. Well, none solved my problem and I had to dig … well, not that much: I simply changed the output file extension to zip and extracted the zip content. Then I did the same with the output file from my first sample, compared both zip contents with SourceGear DiffMerge, and found that my problem was culture related. Yes, my complex application sets Thread.CurrentThread.CurrentCulture to a non-English culture. For sample purposes I was simply using the ToString method to convert numbers and dates to a string representation, but I forgot that XML is culture invariant, and thus using a decimal separator other than "." will result in a deserialization problem. I solved the "file is corrupt and cannot be opened." error by using the Convert.ToString(object, CultureInfo.InvariantCulture) method instead of the ToString method. Hope this can help someone.


  • C language preprocessing question

    - by khanna_param
    Hi, there are different kinds of macros in the C language, and nested macros are one of them. Consider a program with the following macros:

        #define HYPE(x,y) (SQUR(x)+SQUR(y))
        #define SQUR(x) (x*x)

    Using these we can successfully compile and get the result. My question: as we all know, the C preprocessor replaces all occurrences of the identifiers with the replacement string. Considering the above example, I would like to know how many times the C compiler traverses the program to replace the macros with the replacement values. I assume it cannot be done in one go. Thanks.
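
    As a rough illustration of what a standard-conforming preprocessor does (a sketch of the mechanism, not of any particular compiler): it does not make repeated passes over the whole program. When a macro invocation is replaced, the replacement text itself is rescanned for further macro names, so a single use expands in place:

        HYPE(a,b)  ->  (SQUR(a)+SQUR(b))  ->  ((a*a)+(b*b))

    This is also why SQUR only needs to be defined before the point where HYPE is used, not before the line where HYPE is defined.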


  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6 via FastCGI. On PHP pages with lots of images (60+), some of the images fail to load. It is not always the same images: on each page refresh, an image that worked previously may not load, while an image that did not now does. Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors. All of the images are located on the server in question, and the images directory has the correct permissions. I believe this problem is the result of a limit on requests. All of my attempts at researching this problem point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too). I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images. I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit. I don't see anything that looks like a request limit in php.ini; what am I missing? Any help would be appreciated, thank you.


  • How to set up a simple Ubuntu Server Tomcat cluster on VirtualBox for testing?

    - by Alex Pakka
    I am looking for step-by-step instructions to set up at least two (and later more) simple Ubuntu 12.10 Server VMs on Oracle VirtualBox under Windows 7 64-bit. The test setup would be: an Apache HTTP server on the Windows host acting as a load balancer, plus two lean, small-footprint Ubuntu Server guest nodes with Java 7 and Tomcat 7. The result would be that going to http://localhost:8080 balances between the two nodes and proves session replication. The intention is to help everyone doing high-availability / load-balancing development and testing to create a reasonable environment on a local workstation or mainstream notebook in as little time as possible.


  • How do I put logical operators in an Excel =IF Formula?

    - by Brian Hooper
    I'm trying to enter a formula to display text according to an IF condition. The best I can manage is something like...

        =IF(myval>=minval & myval<=maxval, "OK", "Not OK")

    But this appears to work exactly backwards, displaying OK when myval is out of range and Not OK when it is in range. How do I specify the logical AND correctly? I have tried &&, as I have seen in questions here, and inner brackets, but these result in errors.
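
    For reference, Excel spells logical conjunction as the AND() function rather than an infix operator, so (assuming myval, minval and maxval are cell references or named ranges) a formula along these lines is the usual fix:

        =IF(AND(myval>=minval, myval<=maxval), "OK", "Not OK")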


  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question, extended from a previous one [1]. I have a compressed file and stream it into a Python program, e.g.

        bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt

    parse.py reads from stdin continuously and prints to stdout. My EC2 instance has 16 cores, but top is showing a load average of only 3 to 4. In ps, I am seeing a lot of entries like:

        sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null';

    I know I can use -a in.txt to improve performance, but in my case I am streaming from bz2 (I cannot extract it first since I don't have enough disk space). How can I improve the efficiency in my case? [1] GNU parallel not utilizing all the CPU


  • Intonation issues in Office 2007 and Internet Explorer

    - by Souvlaki
    We were brought a laptop with Windows 7 Home Premium set up for Greek-speaking users. The installed languages and keyboards are English (US), as the default, and Greek. Microsoft Office 2007 (Greek) and Internet Explorer 9.0.8112.16421 (Greek) are also installed. When the user tries to write accented Greek letters (such as ά or έ) in Office or IE, instead of the correct letter the result is the accent and the plain letter separately (something like ΄a instead of ά). Do you need any other information about the system, and what would you suggest looking for as the cause of this problem?


  • Dividing with GNU's bc

    - by Boldewyn
    I'm just starting with GNU's bc and I'm stuck at the very beginning (very discouraging...). I want to divide two numbers and get a float as the result:

        $ bc
        bc 1.06.94
        Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
        This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'.
        15/12
        1
        15.0/12.0
        1
        15.000000/12.000000
        1
        scale(15.00000)
        5

    The man page says that division returns a number with the same scale as the initial values. Obviously this is either not true or I'm missing something. Googling hasn't brought up any new insights (besides that 'BC' can also stand for 'British Columbia'). Do you see my error? Better yet, do you know any good references/tutorials for bc?
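
    For what it's worth, in GNU bc the scale of a quotient is taken from the scale variable (default 0), not from the operands, so setting it explicitly (or starting bc with -l, which sets scale to 20) gives the expected fraction. A quick check:

        $ echo 'scale=4; 15/12' | bc
        1.2500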


  • Context switches much slower in new Linux kernels

    - by Michael Goldshteyn
    We are looking to upgrade the OS on our servers from Ubuntu 10.04 LTS to Ubuntu 12.04 LTS. Unfortunately, it seems that the latency to run a thread that has become runnable has significantly increased from the 2.6 kernel to the 3.2 kernel. In fact, the latency numbers we are getting are hard to believe. Let me be more specific about the test. We have a program with two threads. The first thread gets the current time (in ticks, using RDTSC) and then signals a condition variable once a second. The second thread waits on the condition variable and wakes up when it is signaled. It then gets the current time (in ticks, using RDTSC). The difference between the time in the second thread and the time in the first thread is computed and displayed on the console. After this, the second thread waits on the condition variable once more. So we get a thread-to-thread signaling latency measurement once a second. On Linux 2.6.32, this latency is somewhere on the order of 2.8-3.5 us, which is reasonable. On Linux 3.2.0, this latency is somewhere on the order of 40-100 us. I have excluded any differences in hardware between the two hosts. They run on identical hardware (dual-socket X5687 {Westmere-EP} processors running at 3.6 GHz with hyperthreading, SpeedStep and all C-states turned off). We are setting the affinity to run both threads on physical cores of the same socket (i.e., the first thread runs on Core 0 and the second thread on Core 1), so there is no bouncing of threads between cores or bouncing/communication between sockets. The only difference between the two hosts is that one is running Ubuntu 10.04 LTS with kernel 2.6.32-28 (the fast context-switch box) and the other is running the latest Ubuntu 12.04 LTS with kernel 3.2.0-23 (the slow context-switch box). Have there been any changes in the kernel that could account for this ridiculous slowdown in how long it takes for a thread to be scheduled to run?


  • Sysprep.exe completely missing on both of my Windows 7 64-bit machines. How should I find a workaround?

    - by Zoltán Tamási
    The sysprep.exe file is simply missing on my Windows 7 64-bit machine. I tried to find it on another computer, but it wasn't found there either. I can't understand it, because on a lot of forums and even in the official articles there are many references to this tool. I've checked the system, System32 and SysWOW64 folders, and even did a full search with Total Commander. I only found a sysprep folder inside System32, but it contained only an en-US subfolder, which was empty. Then I thought I would give my Windows PE boot disk a try, which I created a while ago. No result: only the same empty en-US folder is present there as well. If anyone knows what's happening, please point me in the right direction. I need to clone my system and I'm stuck right at the first step...


  • When opening any file in Excel, a 1 is added to the name, and the default is to save a new copy…

    - by Chris
    OK... I've searched a lot for this, but it's not an easy question to search for! When I open any file (xls or xlsx) in Excel 2007, Excel acts as if it's a read-only file, essentially creating a new file with the name plus a 1 on the end. E.g. I open NewDoc.xlsx, Excel opens it as NewDoc1.xlsx, and the save button brings up the Save As dialogue in my default folder. Does anyone know how to set it back to allowing me to open, edit and save a document without having to browse to the original document and save over it? My immediate thought was access permissions, but the file is in a network folder where my user has been given Full Control. I also tried creating a new file in that folder, and also on my local machine just in case: same result. To make it even stranger, if I browse to the original file using the Save As dialogue, it will let me save over the original without any further prompts.


  • Draw "vision cone" / targeting element onto game world

    - by gkimsey
    I want to indicate various things using a "pie slice" sort of shape, similar to vision cones in stealth-game minimaps or targeting indicators in RTS-type games for frontal-area attacks. Something generic enough to be used for both would be ideal. I need to be able to procedurally (and efficiently) change things like the slice width and length, color, transparency, position in the world, etc. For my particular situation, there's no concern with elevation, funky terrain, or really any third axis at all as far as this element is concerned. I have two first inclinations on how to accomplish this: 1) Manually generate the vertices for a main triangle (possibly two, superimposed to get the border effect), plus a handful more to approximate the arc at the end, and roll it into a mesh. 2) Use some sort of 2D drawing library to create a circle, mask it off at the right angles, render to texture, and use that. For reference, I have some experience with Ogre3D, but I'm not attached to it, as this is a mostly academic pursuit at the moment. Other technologies that might be better at accomplishing this are more than welcome. Finally, I'm kind of curious about how to do a "flashlight" or similar 3D effect that could produce the same result, but on all surfaces in the lit area.


  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site hosting from Bluehost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured, and it loads fine when I specify the IP address in the browser. I have gone into Route 53 in the AWS console, created a "hosted zone" for the domain, and then created a new record set of type "A" using the IP address as the value. The domain name is registered with Bluehost. I've logged into the Bluehost account and updated the domain name servers to point to those specified in Route 53 in the AWS console. When I hit the IP address directly, the site loads; however, it doesn't load when using the domain name (I get a Google Chrome error page saying the page is not found). I've tried using this site: http://dns.squish.net/ to debug, but it seems to be giving me the correct results:

        fizaclegems.com 300 IN A 107.20.209.78

    where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?

