Search Results

Search found 24931 results on 998 pages for 'information visualization'.


  • Why is access to my database very slow?

    - by Fabien
    I have a mysql database that used to work perfectly fine, but now it is dead slow on startup. When I type in

        $> mysql -u foo bar

    I get the following message for about 30 seconds before I get a prompt:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

    Of course, I tried it and it goes a lot faster:

        $> mysql -u foo bar -A

    But why do I have to wait so long on a regular startup? This is not a very big database, and the data does not seem to be corrupted (everything looks fine after startup). I have no other client connecting to the mysql server at the same time (only one process is shown by show full processlist) and I have already restarted the mysqld service. What's going on?
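
    For reference, the delay is the client re-reading table and column names for tab completion, which is exactly what -A (--no-auto-rehash) skips. A minimal sketch of making that behaviour the default, assuming a per-user ~/.my.cnf:

        [mysql]
        no-auto-rehash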

    Read the article

  • Password Expired on Server 2008 R2

    - by Shaharyar
    Hello everybody,
    We're facing some trouble with our Windows Server 2008 R2 installation. The passwords expired and we're prompted to change the password. After changing it, we get the following error message:

        Configuration information could not be read from the domain controller, either because the machine is unavailable, or access has been denied.

    But we aren't even using a domain controller. So we tried running the server in Safe Mode, where we get the following message after changing the password:

        An attempt was made to logon, but the network logon service was not started.

    Are there any other things I could try? All help is more than appreciated!

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions to deal with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
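
    If the robots.txt route is taken anyway, a minimal sketch, assuming the trip/location galleries are the ones kept out of the index (the second filename below is a hypothetical placeholder for however those galleries are actually named):

        User-agent: *
        Disallow: /New_Mexico_2008.php
        Disallow: /Colorado_2009.php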

    Read the article

  • Accessing second hard drive

    - by Jonathan
    So I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60gb SSD hard drive, and in the installation it never acknowledged the existence of my second hard drive. The hard drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read the following: https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I did have Windows 7 64-bit prior to installing Ubuntu. I have backed up all the files on the hard drive, but if I could just access them straight off that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help

    EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, disk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/

    EDIT: Result from sudo fdisk -l

        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d2dfd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        6994    56174592   83  Linux
        /dev/sda2            6994        7298     2438145    5  Extended
        /dev/sda5            6994        7298     2438144   82  Linux swap / Solaris

    @djeykib So very close to fixing it.. unfortunately on the last command you gave it says this:

        $ sudo apt-get install linux-lts-backport-natty
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-lts-backport-natty

    Checking on http://www.ubuntuupdates.org/ppas reveals that it is only available for 10.04. Looks like I'll have to unplug and re-plug hardware if I want it working still :(
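
    For anyone hitting the same thing, a minimal sketch of mounting a second drive by hand once it shows up in sudo fdisk -l (the device name /dev/sdb1 below is a placeholder -- use whatever fdisk actually reports for the 1TB disk; ntfs-3g is assumed only because the drive held files from a Windows 7 install):

        sudo mkdir -p /media/data
        sudo mount -t ntfs-3g /dev/sdb1 /media/data   # files then appear under /media/data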

    Read the article

  • How do I drag my widgets without dragging other widgets?

    - by Cypher
    I have a bunch of drag-able widgets on screen. When I am dragging one of the widgets around, if I drag the mouse over another widget, that widget then gets "snagged" and is also dragged around. While this is kind of a neat thing and I can think of a few game ideas based on that alone, that was not intended. :-P

    Background Info
    I have a Widget class that is the basis for my user interface controls. It has a bunch of properties that define its size, position, image information, etc. It also defines some events: OnMouseOver, OnMouseOut, OnMouseClick, etc. All of the event handler functions are virtual, so that child objects can override them and make use of their implementation without duplicating code. Widgets are not aware of each other. They cannot tell each other, "Hey, I'm dragging, so bugger off!"

    Source Code
    Here's where the widget gets updated (every frame):

        public virtual void Update( MouseComponent mouse, KeyboardComponent keyboard )
        {
            // update position if the widget is being dragged
            if ( this.IsDragging )
            {
                this.Left -= (int)( mouse.LastPosition.X - mouse.Position.X );
                this.Top -= (int)( mouse.LastPosition.Y - mouse.Position.Y );
            }

            ...

            // define and throw other events
            if ( !this.WasMouseOver && this.IsMouseOver && mouse.IsButtonDown( MouseButton.Left ) )
            {
                this.IsMouseDown = true;
                this.MouseDown( mouse, new EventArgs() );
            }

            ...

            // define and throw other events
        }

    And here's the OnMouseDown event where the IsDragging property gets set:

        public virtual void OnMouseDown( object sender, EventArgs args )
        {
            if ( this.IsDraggable )
            {
                this.IsDragging = true;
            }
        }

    Problem
    Looking at the source code, it's obvious why this is happening. The OnMouseDown event gets fired whenever the mouse is hovered over the Widget and the left mouse button is "down" (but not necessarily in that order!). That means that even if I hold the mouse down somewhere else on screen and simply move it over anything that IsDraggable, it will "hook" onto the mouse and go for a ride. So, now that it's obvious that I'm Doing It Wrong™, how do I do this correctly?

    Read the article

  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a mysql database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file, but prepends the following three lines at the top of the output file:

        ::::::::::::::
        /var/www/outputfilename
        ::::::::::::::

    I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default "44 * * * * root cd / && run-parts --report /etc/cron.hourly". If I use su to become root and run "cd / && run-parts --report /etc/cron.hourly", the script runs as expected and the output doesn't have the mysterious additional 3 lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.
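
    One way to narrow it down, as a sketch: run the hourly jobs under an environment as close to cron's as practical and see whether the header lines appear, which tells you whether they come from the script itself or from something in cron's context (env -i only approximates cron's environment, it is not identical):

        env -i /bin/sh -c 'cd / && run-parts --report /etc/cron.hourly'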

    Read the article

  • I'm applying for a position at a startup. To whom should I address my cover letter?

    - by sapphiremirage
    One of the co-founders answered questions about the company when the job was posted, but I feel like I shouldn't assume that he's the one who is in charge of hiring. Since it's relatively new and has a lot of name overlap with other things already on the web, it's hard to find any information about the company online, much less the name of their hiring manager. I'm not even certain that they do have a hiring manager, since I seem to remember that they are just an 8-person team. I've heard that "To whom it may concern" is tacky, and normally I would say something along the lines of "Dear Head of Human Resources", but that clearly doesn't work in this case. Any idea what my salutation should be?

    Later edits:

    Final version: "To Joe Programmer and/or the AwesomeStartup.com hiring team," (+ a few words in the first paragraph explaining why I am addressing Joe Programmer). I've already sent the email, so nothing you say here will save me. However, feel free to comment on my decision if you think your words will be useful to future generations.

    Old version (left here because some people responded to it): "To the hiring manager for internships at Awesomestartup.com,"

    Additionally, because so many people made comments about the content of my letter: I did spend several hours writing the cover letter itself and making sure that it was awesome. After spending such a long time working on the important part, I asked this question because I wanted to make sure that it wouldn't get passed over by some human who was having a bad day and decided that my salutation was inappropriate. Not likely when the most likely reader of that email is a programmer type, I know, but I figured that it wouldn't hurt not to be sloppy.

    Read the article

  • Ubuntu 12.04 Network Manager: unable to save manual setting to set up a static ip

    - by Andy
    I am fairly familiar with setting up servers, and Ubuntu is generally my flavor of choice, but I just installed 12.04 desktop and I am seeing some behavior in Network Manager that is really puzzling. The network connection works fine if I leave it set to DHCP, but I would like a static IP address for my new web server. When I go into Network Manager and edit the connection for the one and only NIC, I can select MANUAL from the dropdown menu, but as soon as I do, the Save button becomes greyed out. Even after filling out all fields for the connection it is still grey and I am unable to save the static IP connection information. Any thoughts would be greatly appreciated. I'm hoping there is just some new setting that I am unaware of.... On another note, if I stop Network Manager and edit the interfaces file (and the appropriate hosts/routes/dns files), I do get a static IP address assigned and I can contact my server from the outside; however, the server cannot find the internet. It can't even ping its own IP... I can ping the loopback interface though. I'm really confused on this one. Hoping someone can offer some help.
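
    For the interfaces-file route, here is a minimal static stanza for comparison (the addresses are hypothetical placeholders; a missing gateway or dns-nameservers line is a common reason a box is reachable from outside but cannot find the internet itself):

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 8.8.8.8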

    Read the article

  • Is there a way to make scp run faster on a Mac OS X?

    - by paul_sns
    I'm trying to upload a Flex-generated SWF file from my Macbook (running Snow Leopard) using the command scp main.swf server.com:/ I had set up key authentication to avoid typing the user/pass every time. This process normally takes up to two minutes using my connection at home (768kbps down/300+ kbps up). The interesting part is that when I use WinSCP on my Windows XP machine, the process only takes 30 seconds max. Both my MacBook and Windows XP machine use the same internet connection. The MacBook is connected to the router via cable (which should be faster, right?) while the Windows XP machine connects through WiFi. Let me know if you need additional information in order to diagnose the problem. Thanks!
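
    One thing worth trying, as a sketch: enable compression and a cheaper cipher on the scp side and see whether the timing changes (whether either helps depends on where the slowdown actually is, and arcfour availability depends on the OpenSSH build):

        scp -C -c arcfour main.swf server.com:/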

    Read the article

  • Regular Expression to replace part of URL in XML file

    - by Richie086
    I need a regular expression in Notepad++ to search/replace a string. My document (XML) has several thousand lines that look similar to this:

        <Url Source="Output/username/project/Content/Volume1VolumeName/TopicFileName.htm" />

    I need everything starting from Volume1 up to the closing .htm" / to be replaced with X's or some other character, to mask the actual file names in this file. So the resulting string would look like this after the search/replace was performed:

        <Url Source="Output/username/project/Content/Volume1XxxxxxXxxx/XxxxxXxxxXxxx.htm" />

    I am working with confidential information that I cannot release to people outside of my company, but I need to send an example log file to a 3rd party for troubleshooting purposes. FYI, the X's do not need to follow the upper/lower case after the replacement, I was just using different-case X's for the hell of it :)
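
    If dropping to a command line is an option, a length-preserving sketch with a perl one-liner (the input and output filenames are placeholders; the pattern assumes every path follows the Volume1.../....htm shape shown above):

        perl -pe 's{(Volume1)([^/"]*)/([^"]*)(\.htm)}{$1 . ("X" x length($2)) . "/" . ("X" x length($3)) . $4}ge' log.xml > log_masked.xml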

    Read the article

  • Text template or tool for documentation of computer configurations

    - by mjustin
    I regularly write and update technical documentation which will be used to set up a new virtual machine, or to have a lookup for system dependencies in networks with around 20-50 (server-side) computers. At the moment I use OpenOffice Writer with text tables, and create one document per intranet domain. To improve this documentation, I would like to collect some examples to identify areas where my documents can be improved, regarding general structure and content, to make them easy to read and use not only for me but also for technical staff, helpdesk etc. Are there simple text templates (for example for OpenOffice Writer) or tools (maybe database-driven) for structured documentation of a computer configuration? Such a template / tool should provide required and optional configuration sections, like 'operating system', 'installed services', 'mapped network drives', 'scheduled tasks', 'remote servers', 'logon user account', 'firewall settings', 'hard disk size' ... These documents are not so much about low-level hardware detail as about infrastructure / integration information (no BIOS settings, MAC addresses).

    Read the article

  • When copying VM filesystem over netcat, dd copies double the disk size

    - by JivanAmara
    I'm attempting to copy the disk of a working headless VirtualBox VM (VM1) on one server to a new VM (VM2) on a vCloud server. I don't have access to the host of VM2. The OS is Windows Server 2003 (32-bit).

    I start both VMs with a live Knoppix image. On VM2 I run:

        nc -l | dd of=/dev/sda bs=512

    and on VM1 I run:

        dd if=/dev/sda bs=512 | nc <VM2 address> <port>

    I previously did this with another Windows VM and it worked fine. VM1 has a disk of size ~70GB (verified with fdisk); however, the amount of data dd reports read/written is ~139GB. Of course the target machine doesn't work properly: I get a Windows splash screen, then a blue error screen with general 'system not working' information. I'm at a loss as to what could cause this. Any ideas?

    Read the article

  • Free / Cached / Available memory on Linux

    - by pkoraca
    I have read that Linux uses free memory for caching, to make the system faster. However, both Nagios and the Paessler PRTG monitoring system show me that my memory usage is critical. I could change the Nagios mem_usage script to sum free and cached memory, but would that be correct information? I doubt that they misunderstood Linux memory usage.

    Let's say I have 8 GB RAM: 5 GB is used, 2 GB is cached, and I have 1 GB of free memory. Should real available memory be free + cached (3 GB)? If some new application needed an additional 3 GB of RAM, could it take everything from cache and free without using swap, or is there a minimum that should stay in cache?

    Real example:

        $ cat /proc/meminfo
        MemTotal:        5984256 kB
        MemFree:          137052 kB
        Buffers:          140484 kB
        Cached:          3439616 kB
        SwapCached:          244 kB
        Active:          3148824 kB
        Inactive:        2341768 kB
        ...

    My monitoring tools show that I have 137 MB of free RAM; however, I have ~3.5 GB in cache. Thanks!
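
    As a rough sketch of the "free plus buffers plus cached" figure a monitoring check could report instead, straight from /proc/meminfo (values in kB):

        awk '/^(MemFree|Buffers|Cached):/ { sum += $2 } END { print sum " kB" }' /proc/meminfo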

    Read the article

  • Why is the size of windows off by 226x238 if defined via the Window Rules?

    - by Bobby
    I have installed Sawfish 1.8.2 from source on my new Ubuntu 12.04 installation following the Debian instructions, but I had this problem also with the stock 1.5.3. Whenever I define dimensions in the Window Rules for a window, the size is off by exactly 226x238 pixels, which means that 100x100 turns into 326x328. That's very odd behavior, given that Sawfish is saving and loading the dimensions of the windows correctly (if saved via the window menu).

    Some additional system information:

        $ uname -a
        Linux Dagon 3.2.0-24-generic-pae #39-Ubuntu SMP Mon May 21 18:54:21 UTC 2012 i686 i686 i386 GNU/Linux
        $ sawfish --version
        sawfish version 1.8.2

    nvidia proprietary driver, 9600GT. Two monitors, 1920x1080 + 1440x900 in one session. Positioning the windows is working fine; only the dimensions are off by that odd number. Does somebody have an idea why?

    Read the article

  • Question on business connections and page rank?

    - by Viveta
    I just want to ask this question to get a yes/no answer on something I've been wondering about lately. With so many sites now using nofollow, it has become harder to earn ranking for your own pages, even when your content is useful and draws traffic, if it isn't the kind of thing your business connections link to or share. What I am trying to find out is whether a large number of, say, "likes" on your Facebook page passes any benefit through to your website, or whether all of that connection to your site's content stops at the Facebook page. Are you then effectively competing with your own website in the SERPs - your Facebook page versus your home page? Am I correct that if your Facebook page starts doing really well in connections and likes (helping bump up its PageRank), and you have keyword-optimized links to your site on that page, there is still no benefit to your website, other than people getting to your Facebook page and then being more likely to click through? I hope I explained what I am asking well enough. I just wanted to get a better picture of this so I know what to focus on and how I'll be linking to my desired landing pages in the future.

    Read the article

  • XAMPP MySQL stops running after ~1.5 seconds

    - by Nona Urbiz
    I have tried installing it as a service. Nothing seems to work! I have checked the status page and MySQL is listed as "Deactivated". When trying to open phpMyAdmin I get:

        Error
        MySQL said:
        #1045 - Access denied for user 'root'@'localhost' (using password: NO)
        Connection for controluser as defined in your configuration failed.
        phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

    and from the CD demo:

        Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'root'@'localhost' (using password: NO) in C:\xampp\htdocs\xampp\cds.php on line 77
        Could not connect to database!
        Is MySQL running or did you change the password?

    Thanks for any suggestions or help you can give!
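
    Since MySQL itself is exiting shortly after it starts, the first thing worth reading is its error log; a sketch, assuming the default XAMPP layout (the exact file name and path can differ between XAMPP versions):

        type C:\xampp\mysql\data\mysql_error.log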

    Read the article

  • How secure is Microsoft 2007's encryption?

    - by ericl42
    I've read various articles about Microsoft's encryption, and from what I gather, 2007 is secure using all default options because it uses AES, and 2000 and 2003 can be configured to be secure by changing the default algorithm to AES. I was wondering if anyone else has read any other articles or knows of any specific vulnerabilities involved with how they implement the encryption. I would like to be able to tell users that they can use this to send semi-sensitive documents as long as they use AES and a strong password. Thanks for the information.

    Read the article

  • Recovering an old website

    - by noah
    I have a client with an old website that somebody set up for him long ago. The guy who set it up is unreachable, so how do we go about trying to take it over? A WHOIS lookup got us some contact information, but I don't have great hopes for that (it hasn't been updated in quite some time). The nameservers are ns1.theplanet.com and ns2.theplanet.com, and we will try calling them, but I don't expect we'll be able to get much from them. What are our options? Is there a way I can discover the registrar so we can try contacting them as well?

    EDIT: It would be sufficient if we could get control of the domain name or put in some sort of redirect to the new site. Either hosting was prepaid for quite some time, or someone else is still paying for it, so we don't care about that.
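
    For the registrar question specifically, a quick sketch (the domain is a placeholder): a plain WHOIS lookup normally names the sponsoring registrar directly, separate from the possibly stale contact details.

        whois example.com | grep -i registrar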

    Read the article

  • Employee Engagement: Drive Business Value

    - by Kellsey Ruppel
    As we’ve been discussing this week, employee engagement is extremely important, and you’ve probably realized that effectively engaging your employees is essential to driving business value. Your employees are the ones responsible for executing on the business’ objectives. Your employees (in the sales & service departments) are the ones interacting with your customers the most, so delivering on customer expectations and attaining high levels of customer engagement are simply not possible without successfully empowering this stakeholder group. High employee and partner engagement can have many benefits, including:

      • Higher levels of employee productivity
      • Longer employee retention
      • Stronger, more enduring and more successful relationships
      • Serving as ambassadors for an organization’s brand
      • Being more likely to deliver excellent customer service
      • Referring others for hire
      • Recommending the organization’s products and services
      • Sharing feedback with their colleagues

    In a way, engagement is a measure of employee investment in an organization’s mission and brand. And then you have the enablement piece of this as well. It’s hard to imagine a high level of engagement existing among employees who don’t feel that they’ve been enabled to do their jobs very efficiently or effectively. You’re just not going to find high engagement among people if the everyday processes and technologies they work with make it a challenge for them to access, share and manage the information they need to do their jobs, or if they’re unable to effectively collaborate around the projects they’re working on.

    How does your organization measure on the employee engagement spectrum? We’ve got a number of different resources to help you get started!

      • Portal Resource Center
      • Video: Got a minute?
      • WebCenter in Action Webcast Series
      • Portal Engagement Webcast

    Read the article

  • Blocking IP's Nginx behind proxy

    - by FunkyChicken
    I'm running an Nginx 1.2.4 webserver here, and I'm behind a proxy of my hoster to prevent DDoS attacks. The downside of being behind this proxy is that I need to get the real IP information from an extra header. In PHP it works great by doing $_SERVER['HTTP_X_REAL_IP'] for example. Before I was behind this proxy of my hoster, I had a very effective way of blocking certain IP's by doing this: include /etc/nginx/block.conf, and allow/deny IP's there. But now, due to the proxy, Nginx sees all traffic coming from 1 IP. Is there a way I can get Nginx to read the IP's the way PHP does, from the X-Real-IP header?
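
    A minimal sketch of what that looks like with nginx's realip module (this assumes nginx was built with --with-http_realip_module; the proxy address below is a hypothetical placeholder for the hoster's proxy):

        # in the http or server block
        set_real_ip_from 10.0.0.1;     # the hoster's proxy (placeholder address)
        real_ip_header   X-Real-IP;    # trust the header the proxy already sets

    With that in place, $remote_addr - and therefore the allow/deny rules in block.conf - reflects the visitor's address instead of the proxy's.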

    Read the article

  • XNA on the TechNet Wiki

    - by Michael B. McLaughlin
    Many months ago I came across an interesting Microsoft website, the TechNet Wiki, when I was looking for information about something that I can’t even remember anymore. I noticed at the time that its section on gaming technologies was sparse and even exchanged a few emails with one of the friendly Microsoft employees who contributes there regularly about some ideas I had for the site. I seem to recall mentioning my intentions to add some articles on XNA when I found the time, but between one thing and another it seemed like I was busy from the end of last Summer straight through ‘til now. Yesterday I came across the TechNet Wiki link in my miscellaneous links collection and remembered my intentions many months ago. I decided that adding XNA pages to it would make a nice project to work on while taking breaks from my other projects. So I wrote my first two articles for it: XNA Framework Overview and Content Pipeline Overview. I hope to add more in the coming days and weeks. I’d be delighted if some of my fellow XNA enthusiasts out there joined in, time permitting. If anyone else would like to add a page or two on a topic area you’re familiar with, this seems like a great opportunity to contribute to the community and help build a nice knowledge base to benefit all of us who are always interested in learning something new!

    Read the article

  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression, and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage system is slow, it is possible that ZFS would perform better with compression turned on. This is easy to test for writes by measuring CPU and disk usage and throughput while writing files (of course there may be some latency, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then compression is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?
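
    For the write-side test described above, a sketch using the standard zfs tooling (the pool/dataset name is a placeholder, and the available compression algorithms depend on the ZFS version):

        zfs set compression=on tank/data     # or e.g. gzip, where supported
        # ... write the test files, watching CPU and disk throughput ...
        zfs get compressratio tank/data      # how much the on-disk data actually shrank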

    Read the article

  • Capture documents in bitonal, or grayscale then downsample

    - by Jason R. Coombs
    I'm about to embark on a document archival process. I'm going to spend a lot of good money to archive some paper (actually microfiche) to TIFF images. I have a choice of 300-dpi bitonal (2-bit, black/white) or 300-dpi grayscale (8-bit). Cost is the same for either format. Data volume (and thus image size) is not a factor. It seems to me that the grayscale, since it is scanned at the same resolution as the bitonal, would always contain more information and could always be downsampled to the equivalent bitonal image. Are there any downsides to selecting grayscale, and then later downsampling to bitonal if desired? In other words, is it possible that the scanning software will produce a more accurate (or more legible) bitonal representation than a grayscale image converted to bitonal?
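
    For the later downsampling step, a sketch with ImageMagick (filenames are placeholders, and the fixed 50% threshold is only a starting point -- a tuned or adaptive threshold usually gives more legible text):

        convert scan_gray.tif -threshold 50% -type bilevel -compress Group4 scan_bw.tif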

    Read the article

  • Jet Brains release WebStorm 5.0

    - by TATWORTH
    At http://www.jetbrains.com/webstorm/whatsnew/index.html?WS50ROW, Jet Brains have announced the release of WebStorm 5.0, an IDE that brings the ease of code writing in VB.NET and C# that you get with ReSharper to JavaScript, CSS and LESS. (There are some more details at http://blog.jetbrains.com/webide/2012/08/liveedit-plugin-features-in-detail/.)

    Code completion in JavaScript, CSS and LESS is a very welcome feature. I look forward to trying out Web Storm. The download at http://www.jetbrains.com/webstorm/download/index.html comes with a free 30-day trial. Price information is at http://www.jetbrains.com/webstorm/buy/index.jsp - you should note that if you are an open-source developer, you can apply for a free license. The price of a personal license at £23 + VAT is a no-brainer, and the price of a commercial license would be paid back within a few days by the increased productivity that this tool brings.

    Web Storm currently requires Google Chrome to run. Like ReSharper, it appears to be a very able tool. It includes tools such as:

      • XSLT debugging
      • JSLint for checking for JavaScript errors
      • JavaScript debugging
      • JavaScript unit testing (including code coverage)
      • JavaScript folding regions
      • CoffeeScript support

    I suggest that you try WebStorm 5.0.

    Read the article

  • Does Intel Smart Response provide any statistics on the cache usage?

    - by Tom Seddon
    I've set up my Z68-based Core i7 PC with a 60GB SSD dedicated as a Smart Response cache drive. Is there any way I can get any statistics out of it? It would be nice to have some information on how much cache space is actually being used, maybe how much of it was actually accessed recently, and how many reads in general are coming from the SSD rather than from the mechanical disk. These statistics might help to quickly provide some evidence for or against the use of Smart Response, without my having to reinstall Windows on the SSD (etc.) to find out. The Windows ReadyBoost feature has some performance counters you can access via the Windows 7 perfmon tool, for example, which is the kind of thing I'm hoping is somehow available. Smart Response provides no perfmon counters, though, and the Intel Rapid Storage Utility tells you pretty much nothing except that Smart Response is switched on.

    Read the article
