Search Results

Search found 18329 results on 734 pages for 'interpret order'.


  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233) Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this makes a difference in some way (as some of the discussion in this post about a "SuperSocketNetLib\Lpc" registry key seems to suggest). I have tried restarting the SQL Server service, by the way, and also asked someone to log onto the machine with an admin account and restart the SQL Server service. Still no luck.
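
    One way to narrow this down, offered as a test sketch rather than a known fix (instance name assumed to be .\SQLEXPRESS): sqlcmd accepts protocol prefixes in the server name, so each protocol can be tried in isolation, and whichever one fails points at the broken listener rather than the priority order.

        rem Force each protocol in turn; -E uses Windows authentication.
        sqlcmd -S lpc:.\SQLEXPRESS -E
        sqlcmd -S np:.\SQLEXPRESS -E
        sqlcmd -S tcp:localhost\SQLEXPRESS -E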

    Read the article

  • Problem with Amiga 1200 accelerator board

    - by cc0
    I recently walked past a dump, where out of the corner of my eye I spotted something that looked like a huge keyboard. I went to take a closer look and found out that it was an Amiga 1200 with an 030 accelerator board and a Scala dongle. Jackpot! So anyway: I dried it, cleaned it, it works, but the floppy was not powering on, and the same with the hard drive. I am using an old Amiga 1200 PSU that was making a strange high-pitched noise when I tried to boot the Amiga with the hard drive installed. I removed the hard drive and it booted fine, with the PSU not emitting any detectable noise. However, when I have the 030 installed it sometimes reboots and shows a red "Software Error" screen. I tried removing the memory on the board; same effect. Sometimes it does not boot at all, just gives a black screen. Someone suggested the card had problems with 3.1 ROMs, but this Amiga has only 3.0 ROMs installed. Does anyone have any theories as to why it seems unstable? I don't have any other Amiga parts to cross-swap with to test things, so I'd really appreciate some sound input here so I'd know what to look for in order to try to fix it. And merry Christmas everyone :]

    Read the article

  • Customer won't provide SSH access - FTP only

    - by Max
    Eh, here is my problem: I am working at a web-development agency (that's a problem, but not the real problem; read on). Most of the time I choose the live server myself when creating a new website project. But now the customer already has a "server" (10 GB on a cheapo host!) and the "admin" refuses to give me SSH access to it. But I need to access the server via a shell, because many files will be transferred (I need to be able to upload and extract a tar) and I need to insert or create MySQL dumps via the command line. He argues FTP and phpMyAdmin should be enough... As far as I know, the webspace was ordered just to host the website, so no security-critical apps are running there. How can I either convince the admin to give me the SSH login or tell management that we need our own server? Anyone with similar experiences? This is really annoying, as this is a very small project that should be done fast, and now one has to fight just to get the work done...

    Read the article

  • Cannot connect to my VPN Server from another network

    - by SantaC
    OK, here is the deal. I have a Windows 2008 R2 server with RRAS installed and configured for VPN. I also have DHCP running. On my DC I have AD running, and they're joined to my domain. I am only using one NIC, though. As a client I have Windows 7. I tried connecting to my VPN server from my own network, which worked fine, so the setup is correct. However, when I tried connecting to my VPN server from another network, it does not work. I went to my brother's home and tried connecting to my server, but it did not go through. On my VPN server I have IP 192.168.2.99. At my brother's house, I did the configuration on his Windows 7 machine and it cannot connect to that IP. I am operating on the 192.168.2.x network and he is operating on the 192.168.0.x network. So how do I configure his client in order to get it to work? I tried changing his IP to the 192.168.2.x network, but I am not sure you can do that. I need some help here on what to do.

    Read the article

  • Directories Throwing 404 Errors - Virtual Host Configuration and mod_rewrite

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing-slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, is a better way to go if your server allows it. The virtual host is working and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, this: devserver:9090/page.php goes to devserver:9090/page/ but going to a directory (that has an index.php): devserver:9090/dir/ throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
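
    A hedged guess at the cause, not verified against this exact setup: inside a <VirtualHost> block the rewrite rules run before URI-to-file mapping, so %{REQUEST_FILENAME} still holds the bare URI and the -d test never sees the real directory. That would let the (.*)/$ rule rewrite /dir/ to /dir.php, which doesn't exist, hence the 404. Prefixing the filesystem tests with the document root is the usual workaround in vhost context:

        # sketch: make the existence tests look at the real path, and let
        # real directories fall through to mod_dir / DirectoryIndex
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule (.*)/$ /$1.php [L]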

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all of the client's subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options. Changing the order of services and putting the VPN on top: this works the same as "Send all traffic over VPN connection". Using the "VPN on Demand" option from the network advanced options: this feature is quite rubbish, to be honest; it seems to work only in Safari (?!), and it doesn't route the connection, it basically just triggers the OS to connect to the selected VPN. The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file - but considering there could be hundreds of servers (and therefore hostnames and IPs too), that's a ridiculous job, and if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours. However, this works very simply on Windows: if a hostname is not available through the default network interface, it seems to try the VPN connection, and this works brilliantly. So, how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains to a different DNS)? PS. Using the latest version, 10.6.6. PS2. I am using the VPN to access an intranet, version-control servers (svn://), Samba shares, and for SSH access to servers.
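
    On the DNS angle mentioned at the end: OS X does support per-domain resolvers via files in /etc/resolver, which at least sends lookups for the client's domain to the client's internal DNS server. A sketch, with the domain and nameserver IP as placeholders:

        # hypothetical values - substitute the client's domain and internal DNS IP
        sudo mkdir -p /etc/resolver
        printf 'nameserver 10.0.0.2\n' | sudo tee /etc/resolver/example.com
        # verify with: scutil --dns
        # note this steers DNS queries only; it does not by itself route the
        # traffic over the VPN interface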

    Read the article

  • fglrx-legacy-driver not seeing Radeon HD 4650 AGP

    - by Rocket Hazmat
    I am running Debian Squeeze on an old Dell Dimension 8300 box. It has an AGP Radeon HD 4650 card. I use this machine to mine bitcoins, and today I noticed that the machine had rebooted! My precious uptime! Anyway, my miner wouldn't start, so I figured I might as well update my graphics driver; maybe that would fix the issue. I went to amd.com and downloaded the newest driver (12.6 legacy), but after installing it, aticonfig gave an error:

        aticonfig: No supported adapters detected

    I uninstalled the driver and figured I'd try to install it from apt. AMD has dropped support for the HD 4000 series in fglrx, forcing me to use fglrx-legacy-driver (currently only in experimental). In order to install this, I had to update libc6 (and some other important packages, like gcc) to their wheezy versions. I finally got fglrx-legacy-driver installed, but I still got:

        aticonfig: No supported adapters detected

    Why isn't the driver finding my video card? I have a hunch it has something to do with the fact that it's an AGP video card. Here is the output of lspci -v (why does it say Kernel driver in use: fglrx_pci?):

        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
            Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0028
            Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 16
            Memory at e0000000 (32-bit, prefetchable) [size=256M]
            I/O ports at de00 [size=256]
            Memory at fe9f0000 (32-bit, non-prefetchable) [size=64K]
            Expansion ROM at fea00000 [disabled] [size=128K]
            Capabilities: [50] Power Management version 3
            Capabilities: [58] AGP version 3.0
            Kernel driver in use: fglrx_pci

    Read the article

  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end or just my inability to see what's right in front of me, but I'm trying to configure a sub-domain in Apache and, no matter what, it always redirects the sub-domain to the web root of the main domain. My configuration is posted below (and yes, the domain name information was purposefully modified):

        <VirtualHost *>
            DocumentRoot /var/www/root/
            ServerName example.com
            <Directory /var/www/root/>
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

        <Directory /usr/share/squirrelmail>
            Options Indexes FollowSymLinks
            <IfModule mod_php5.c>
                php_flag register_globals off
            </IfModule>
            <IfModule mod_dir.c>
                DirectoryIndex index.php
            </IfModule>
            # access to configtest is limited by default to prevent information leak
            <Files configtest.php>
                order deny,allow
                deny from all
                allow from 127.0.0.1
            </Files>
        </Directory>

        # users will prefer a simple URL like http://webmail.example.com
        <VirtualHost *>
            DocumentRoot /usr/share/squirrelmail/
            ServerName squirrelmail.example.com
        </VirtualHost>
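
    One thing that stands out, offered as a guess rather than a confirmed diagnosis: on Apache 2.2, name-based matching only happens when a NameVirtualHost directive covers the address; without it, every request falls into the first <VirtualHost>, which would produce exactly this "always lands on the main domain" symptom. A minimal sketch, placed before the first vhost block:

        # enable name-based virtual hosting for this address
        NameVirtualHost *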

    Read the article

  • Shrinking Windows and recovery partitions on the Samsung New Series 9

    - by bobbaluba
    I just bought a Samsung NP900X3C, and as I was going to install Linux, I noticed that the Windows and recovery partitions occupied a major portion of the disk. The disk is a 128 GB SSD, and I want to keep the Windows partition in order to play some games once in a while, but the Windows disk is already 45 GB full (with no installed programs) and the recovery partition is 20 GB. That leaves under 60 GB for Linux, which is not optimal, since that is what I'm going to be using most of the time, and there would be no room for games on the Windows partition. There are also two small partitions whose purpose I don't know: one of 100 MB at the start of the disk that I'm guessing is some kind of boot partition, and one of 5 GB that is described as an OS/2 hidden C: drive. What I'm wondering is: can I delete the recovery partition? And what about the mystical 5 GB partition? Here is what fdisk reports:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 128.0 GB, 128035676160 bytes
        255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x83953ffc

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   198273023    99033088    7  HPFS/NTFS/exFAT
        /dev/sda3       198273024   207276031     4501504   84  OS/2 hidden C: drive
        /dev/sda4       207276032   250068991    21396480   27  Hidden NTFS WinRE

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier, Ubuntu Server 14.04 LTS, EBS)

    - by CMPSoares
    Hi, I'm working on my bachelor thesis and for that I need to host a node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30 GB of disk space (from what I know, the maximum I get in the free tier), which is barely used. Instead, I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area, as mentioned at "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
        sudo chmod 600 /var/swapfile &&
        sudo mkswap /var/swapfile &&
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
        sudo swapon -a

    But this swap area somehow isn't used. Is something missing in this approach, or is there another way of reducing memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, reducing memory usage, or both.
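
    A couple of read-only checks that would show whether the swap ever came online, plus one common tweak offered as an assumption rather than a known fix:

        sudo swapon -s     # should list /var/swapfile with its size
        free -m            # the "Swap:" row should show roughly 2047 total
        cat /proc/sys/vm/swappiness
        # the Ubuntu default of 60 can leave swap nearly untouched until memory
        # is very tight; raising it makes the kernel move idle pages out sooner
        sudo sysctl vm.swappiness=80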

    Read the article

  • Excel 2007 macro

    - by Binay
    I have a macro which connects to a DB, fetches data for me, and makes it comma-separated. But the problem is that the comma is also getting appended to the last row, which I don't want. I'm struggling here; could you please help out? Here is the relevant part of the code:

        If cn.State = adStateOpen Then
            Rec_set.Open "SELECT concat(trim(Columns_0.ColumnName), ' ','(', 'varchar(2000)' ,')') columnname " & _
                "FROM DBC.Columns Columns_0 " & _
                "WHERE (Columns_0.TableName= " & Chr(39) & Tablename & Chr(39) & _
                " and Columns_0.Databasename=" & Chr(39) & db & Chr(39) & ") " & _
                "ORDER BY Columns_0.Columnid;", cn 'Issue SQL statement
        If Not Rec_set.EOF And Not Rec_set.EOF Then
        Do Until Rec_set.EOF
            For i = 0 To Rec_set.Fields.Count - 1
                strString = strString & Rec_set(i) & ","
            Next
            strFile.WriteLine (strString)
            strString = ""
            Rec_set.MoveNext
        Loop

    Here is the result I am getting:

        EMPNO (varchar(2000)),
        ENAME (varchar(2000)),
        JOB (varchar(2000)),
        MGR (varchar(2000)),
        HIREDATE (varchar(2000)),
        SAL (varchar(2000)),
        COMM (varchar(2000)),
        DEPTNO (varchar(2000)),

    I don't want the last comma.
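
    A minimal sketch of one way around it: build the whole output first and only put a comma (and line break) before each subsequent row, so the final row never receives one. Variable names follow the snippet above, and this assumes strFile is a TextStream, whose Write method is the sibling of the WriteLine already in use:

        ' replaces the Do Until loop above
        Dim rowText As String
        Do Until Rec_set.EOF
            If Len(rowText) > 0 Then rowText = rowText & "," & vbCrLf
            For i = 0 To Rec_set.Fields.Count - 1
                rowText = rowText & Rec_set(i)
                If i < Rec_set.Fields.Count - 1 Then rowText = rowText & ","
            Next
            Rec_set.MoveNext
        Loop
        strFile.Write rowText   ' no trailing comma, no trailing newline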

    Read the article

  • How do you get Linux to honor setuid directories?

    - by Takigama
    Some time ago, during a conversation on IRC, one user in a channel I was in suggested someone setuid a directory in order for it to inherit the user ID on files, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain: when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory, but Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux - and, playing around with rules, it's possible to force a UID on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (tried with various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?
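
    For contrast, the group-ID analogue is the one thing mainline Linux does honour on directories: with setgid (g+s) set, new files inherit the directory's group rather than the creator's primary group. A quick demonstration, with the group name as a placeholder:

        mkdir /tmp/shared
        chgrp developers /tmp/shared          # "developers" is a placeholder group
        chmod g+s /tmp/shared
        touch /tmp/shared/f
        ls -l /tmp/shared/f                   # group shows "developers", not the creator's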

    Read the article

  • Linux SFTP and many local user accounts, limits with mount --bind?

    - by user123428
    I am in the process of building a solution to let many developers (possibly hundreds) work on their files via SFTP, each one jailed in their home directory. For our particular needs, we have a Samba mount point that contains all of the users' home directories. I have started developing the following solution and hit some walls. I have configured an Ubuntu Lucid server as the SFTP server. In order to jail each user in their home directory (without allowing them to browse a directory up and see all the other users' folders) I am using mount --bind and not a symbolic link (also, some FTP clients don't really work with symlinks). The user accounts are local Unix accounts on the SFTP server (not using a directory service or anything) that have an empty home folder created on the local machine; I then use mount --bind to bind the empty folder to the actual user's home directory on the Samba share. With this solution I am hitting a couple of problems. In the case of a server reboot, all the bind mounts are lost because they are not written in fstab. I have also read somewhere that the maximum number of entries in fstab is 400 (which does not really help us). I have thought of writing something that stores the mounts in a text file as a backup and, on server reboot, runs a script that re-mounts them. I am just really unsure about this whole process and was wondering if anyone has any insight on a possibly better solution for SFTP? (not FTP)
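
    On the reboot problem specifically: bind mounts can live in fstab like any other mount, and the entries can be generated from the Samba share instead of hand-edited. A sketch with made-up paths:

        # fstab syntax for a persistent bind mount, one line per user:
        #   /srv/samba/homes/alice  /home/alice  none  bind  0  0
        # generating the lines (append once, then review /etc/fstab):
        for d in /srv/samba/homes/*; do
            u=$(basename "$d")
            printf '%s  /home/%s  none  bind  0  0\n' "$d" "$u"
        done >> /etc/fstab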

    Read the article

  • Cloned CentOS 6.4 web server for test purposes. Virtual host, .htaccess, redirecting URL issue

    - by Shogoot
    I see similar questions, but not my exact challenge. Here is what I have done so far: I cloned a prod server over to VMware to use it as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside of my comfort zone (that's a good thing :) ). The prod server has 2 sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this:

        RewriteEngine ON
        RewriteBase /
        RewriteCond %{QUERY_STRING} id=([0-9]+)
        RewriteRule ^.* %1.htm
        RewriteCond %{QUERY_STRING} page=modules/checkout
        RewriteRule ^.* order.php
        RewriteCond %{QUERY_STRING} page=pages/sidekart
        RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules in lines 4 and 5 redirect them to /html/s1/. An example of such a URL is: s3.com/?page=modules/product&id=521614. I'm trying to get those URLs (without modifying the URL) to redirect to s3's /html/s3/ server structure. I set that up by making a new virtual host, s3, in the test server's httpd.conf with test3.com as the ServerName, changing the other sites to tests1.com and tests2.com, adding an .htaccess to this s3 root directory as well, and making an html/s3/ directory structure that I populated with an index.html, etc. But when I take the same URL (s3.com/?page=modules/product&id=521614) and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page showing up in my browser. I've poked around about a day now and I can't figure out why this happens.
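
    Getting s1's index for a tests3.com request is the classic sign that the request isn't matching any vhost by name and is falling into the first one. Two hedged observations: the question says the ServerName was set to test3.com while the browser went to tests3.com - if that isn't a typo, the name mismatch alone explains the fallthrough - and the vhost needs to sit under a matching NameVirtualHost. A sketch, with paths and addresses assumed:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName tests3.com
            ServerAlias www.tests3.com
            DocumentRoot /var/www/html/s3
        </VirtualHost>
        # tests3.com must also resolve to the test box, e.g. via the client's
        # hosts file while testing:
        #   192.168.56.10  tests3.com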

    Read the article

  • Plesk command working in manual script, not in cronjob

    - by dsaunier
    Hi. In order to install a hosting plan, I use Plesk's commands over SSH, as specified in their official guide. When typed directly into SSH (PuTTY), it works perfectly. The line is as follows, with the values obviously hard-coded when run in the CLI:

        /usr/local/psa/bin/domain --create '.$url.' -owner mynamehere -ip '.IP_SERVER_PLESK.' -status enabled -hosting true -hst_type phys -login '.$ftp_user.' -passwd '.$ftp_pw.' -www false -php true -php_safe_mode false -hard_quota 100M

    I then put that request in a PHP script that does other things after the hosting is installed. Now for the weird part: when calling that script from the CLI it also works fine; I run ./myscript.php and it installs the hosting, then sends emails, etc. However, after I create a cronjob to have that same script called regularly, the Plesk command fails. The cronjob is set up in Plesk as:

        */15 * * * * /usr/bin/php /home/scripts/myscript.php

    It works fine for everything BUT the Plesk hosting install, which returns "Unable to read Control Panel configuration file" and therefore does not install the domain hosting. Still, this is the same script that I call manually! On that server, are the PHP used to call a cronjob and the one used in the CLI different? What am I missing? Help greatly appreciated. Regards.
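
    Cron runs with a much thinner environment than an interactive shell, and that difference is the usual suspect for "works by hand, fails from cron". A way to see it directly (temporary cron line; file path made up):

        # temporary crontab entry to capture cron's environment:
        # * * * * * env > /tmp/cron.env
        # then, from the interactive shell, compare the two:
        diff <(sort /tmp/cron.env) <(env | sort)
        # differences in HOME, PATH or USER are the first things worth
        # exporting explicitly at the start of the cron command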

    Read the article

  • Iptables and system-config-firewall

    - by nivde92
    I had a set of netfilter rules set with iptables, but someone else told me to use system-config-firewall to add a rule for sharing files with Windows (Samba). This rewrote the iptables rules file and I lost my own custom rules. I have a backup copy, but am having trouble restoring them. Edit: the server is CentOS. I already tried to restore the rules with

        iptables-restore < /root/working.iptables.rules

    but for some reason the rules don't change.

    What are you trying to do? Trying to restore the iptables rules that I have in a backup file.

    What have you tried in order to make it happen? I've tried to modify the iptables file with vim, since the iptables-restore command was no help.

    What results did you expect? To get the old rules back.

    What actually happened? Nothing. When I run the command or edit the file by hand, the file doesn't change at all. Maybe something else is overwriting it.
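
    A sketch of the CentOS-specific sequence, assuming the backup really is in iptables-save format. The live kernel rules and the saved file are two different things, which may be why edits appear to "not change" anything:

        iptables-save > /root/pre-restore.rules          # snapshot whatever is loaded now
        iptables-restore < /root/working.iptables.rules  # load the backup into the kernel
        iptables -L -n                                   # confirm the live rules changed
        service iptables save                            # persist to /etc/sysconfig/iptables
        # if system-config-firewall is still active it can rewrite that file
        # again later; disabling or removing it avoids a repeat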

    Read the article

  • Network outage caused by SMC8013WG Cable Modem/Router ?

    - by mkocubinski
    At work, we have a basic Class C network. The gateway/router is an SMC8013WG (stock Comcast commercial cable modem), plus a simple unmanaged switch (HP ProCurve 1400 24G). The SMC8013WG is our default gateway as well as our DHCP server. Periodically - I'd say almost every other day - the entire network will just stop responding. I won't be able to ping/see the gateway, any computers on our local network, or anything on the internet. The only way to fix this is to unplug the Comcast cable modem, wait, and plug it back in. This unfailingly fixes the problem. But it doesn't make much sense to me: shouldn't the network still be fine locally, since everything is plugged into the switch anyway? Why would resetting the router fix this? Can anyone suggest anything to check in order to narrow this problem down? Just to be clear, here is the basic topology:

        { Internet } -- (12.345.67.89) Comcast Cable Modem (192.168.1.1) -- Switch -- 192.168.1.2-254

    P.S. Our IT guy is in about 3 hours a day every other week or so, so... we're kind of on our own most of the time.
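
    A small triage pass to run from any workstation the next time it happens, to separate "the LAN/switch died" from "only the SMC died" (addresses from the topology above; Windows clients are assumed, which is a guess):

        rem the SMC itself:
        ping 192.168.1.1
        rem any other host hanging off the switch - if this works while the
        rem gateway doesn't, the switch and LAN are fine:
        ping 192.168.1.2
        rem a missing or stale MAC entry for 192.168.1.1 implicates the SMC:
        arp -a
        rem if the DHCP lease (handed out by the SMC) has expired, even
        rem local pings will fail, which would explain the whole-LAN symptom:
        ipconfig /all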

    Read the article

  • Nginx and 1000 WordPress Installs - Optimization

    - by GTE
    Hey, I'm trying to create a rather unusual (IMO) configuration where I have: nginx, php-fastcgi, MySQL, and 1000 separate WordPress installs (with WP Super Cache). Each WP install corresponds to a separate subdomain. Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget) which retrieves data from an API and posts it to the respective blog. This is all being run on a virtual server with 1024 MB of RAM, 4 shared processors, etc. The server is not doing well, especially during the times when the cron jobs are being executed. Nginx constantly throws 504 errors and the site has significant lag. 1) Am I crazy for having 1000 individual WP installs? Should I be using WP-MU, and will this help significantly? (I have certain plugin restrictions that make me prefer separate installs, but I could switch if need be.) 2) Instead of having 1000 unique cron jobs, should I be calling, say, a bash script that will then process the 1000 HTTP requests I need? Could the requests be made one after another instead of all at once? (A sketch of that is below.) 3) Any other kinds of suggestions for optimization? Should I be proxying to Apache instead of just using nginx, etc.? Any kind of advice would be appreciated. Thanks in advance.
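
    On question 2, a sketch of collapsing the 1000 crontab entries into one throttled run; the subdomain pattern and the plugin endpoint are placeholders:

        #!/bin/sh
        # one cron entry instead of 1000; -P 4 keeps at most 4 requests in
        # flight, so the box is never hit by all of them at once
        seq 1 1000 | xargs -P 4 -I{} \
            wget -q -O /dev/null "http://blog{}.example.com/?fetch_api=1"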

    Read the article

  • Intermittent 403 errors when using Allow to limit access to a URL with both explicit IPs and SetEnvIf

    - by rbieber
    We are running Apache 2.2.22 in a Solaris 10 environment. We have a specific URL that we want to limit access to by IP. We recently implemented a CDN, and now have the added complexity that the IPs the requests appear to come from are actually the CDN servers and not the ultimate end user. In case we need to back the CDN out, we want to handle both the case where the CDN is forwarding the request and the case where the ultimate client is sending the request directly. The CDN sends the end user's IP address in an HTTP header (for this scenario that header is called "User-IP"). Here is the configuration that we have put in place:

        SetEnvIf User-IP (\d+\.\d+\.\d+\.\d+) REAL_USER_IP=$1
        SetEnvIf REAL_USER_IP "(10\.1\.2\.3|192\.168\..+)" access_allowed=1
        <Location /uri/>
            Order deny,allow
            Allow from 10.1.2.3 192.168.
            allow from env=access_allowed
            Deny from all
        </Location>

    This seems to work fine for a time; however, at some point the web server starts serving 403 errors to the end user - so for some reason it is restricting access. The odd thing is that a bounce of the web server seems to resolve the issue, but only for a time - then the behavior comes back. It might be worthwhile to note as well that this URL is delegated to a JBoss server via mod_jk. The denial of access is, however, confirmed to be at the Apache layer, and the issue only seems to happen after the server has been running for some time.
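
    One way to catch what Apache actually computed on the requests that 403, without changing the access logic: a per-request debug log of the header and both environment variables (the log name and path are made up):

        LogFormat "%h hdr=%{User-IP}i real=%{REAL_USER_IP}e allow=%{access_allowed}e %U %>s" acldebug
        CustomLog logs/acl_debug.log acldebug
        # a 403 line with real= and allow= empty would mean the SetEnvIf pair
        # never fired for that request; populated values would point at the
        # Allow/Deny merge instead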

    Read the article

  • Auto-focus xdvi after running viewdvi in Emacs with AUCTeX

    - by D Connors
    I've been using Emacs with AUCTeX to edit my LaTeX documents for a few days now, but there's something that's really bugging me. As it should, whenever I do C-c C-c RET it compiles the file, and if I repeat the command it views the output in xdvi. It's also set to the minor mode TeX-source-specials-mode, so instead of opening a new window in xdvi it only reloads the window that's already open, brings it to the front, and sends me to wherever the pointer was in Emacs (forward search). Now here's the problem: even though the xdvi window is brought to the front, it's not focused. Instead, the Emacs window keeps focus (and that's where any keyboard input goes). And I keep forgetting that, which leads me to accidentally editing the source file while trying to navigate in xdvi. Not to mention I'm forced to alt-tab in order to focus xdvi, and alt-tab twice if I just want to get back to Emacs. Is there a way around this problem? I just want xdvi to be focused whenever I run the view command from Emacs.

    Read the article

  • PowerPoint not drawing the slide properly...

    - by commradepolski
    So, I've got another issue to post about. I have a user here who uses PowerPoint a lot, Office 2007 with SP2. When he opens a presentation, PowerPoint opens fine without errors, but does not draw the main slide properly. To better explain that: the list on the left-hand side, which shows the slides and what order they are in, loads up fine. You can see the slides, their content, etc. But when you click on a slide to edit it, it does not draw in on the editing screen - that is, the area where the work on the slide is done is what's affected. (The screenshot was from my PC, not the user's.) So instead of the screen saying "Click to add title", it is improperly drawn, such that if I were to drag an Explorer window across it, it would leave a trail. I have tried reinstalling Office, updating it, and giving the user a new Windows image, and nothing has helped. Any help or advice is appreciated.

    Read the article

  • How much RAM should I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed RAM and 2048 MB burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI. On the server itself I have a Joomla community site with a forum system that generally has about 20 people on it max at any point, and even then, during the evening, no one. It's a pretty small site but has a number of modules running on it. It gets about 6,000 visits a month. Also on the server is a WordPress site that gets about 80-150 visits a day, 2 other WordPress sites that aren't developed yet (so they don't get any traffic at all), and 2 static HTML websites that also only get about 500 hits a month. All in all, no huge sites. The issue is that I get "out of memory" errors fairly frequently, which kill my server, and I need to reboot it in order to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket, they just tell me to upgrade the RAM. Now, I'm still pretty new to all this, so I'm not a good judge of how much RAM my sites really need to run. I don't know if my sites really do need this much, OR if VPSVille has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much RAM should I be using with my current setup? Thanks!
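
    Before taking either answer on faith, the box itself can say where the memory goes. A couple of read-only checks (run as root; the last one assumes an OpenVZ-style container, which is a guess for this host):

        free -m                         # what the VPS believes it has vs uses
        ps aux --sort=-rss | head -15   # the 15 biggest resident processes
        # on OpenVZ-style VPSes the real limits live here, and a non-zero
        # "failcnt" column counts how often the container hit them:
        cat /proc/user_beancounters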

    Read the article

  • Excel 2013: VLOOKUP for cells that share common characters but are surrounded by other, non-matching text

    - by Kylie Z
    I am pulling information from 2 different databases. The databases use different naming protocols for the exact same item/specified placement; however, they always have certain components of the name in common. The length of these names can vary throughout each of the databases (see the pic below), so I don't think counting characters would help. I need a formula (probably a VLOOKUP/MATCH/INDEX of some sort) to pair up the names from the 2nd database with the 1st database name and then place the match in the adjacent column (B2) on Sheet1. Until this point I've had to match, copy, and paste the pairs manually from one sheet to the other, and it takes FOREVER. Any help would be much appreciated!!! For example:

        Database1 name in Sheet1!A2:
        728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13

        Database2 name in Sheet2!A13:
        BAN_MSN_ROSMSNAUTOSMASSACHUSETTS728X90_728X90_DFA

    Common factors: "ROSMSNAUTOSMASSACHUSETTS" & "728X90". Therefore A2 and A13 need to pair up. In some cases, Database1 and Database2 will have a common name aspect but the sizing will be different. They need to have BOTH aspects in common in order to be paired, so I would NOT want the below example to pair up:

        Database1 name in Sheet1!A2:
        728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13

        Database2 name in Sheet2!A12:
        BAN_MSN_ROSMSNAUTOSMASSACHUSETTS300X250_300X250_DFA

    Common factor: only "ROSMSNAUTOSMASSACHUSETTS" matches. "728x90" is not equal to "300X250" - the sizing is different, so they should not be paired.
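
    A sketch of one approach, resting on two assumptions that hold in the examples above but would need checking against the full data: the size token (e.g. 728x90) is the text before the first underscore in the Sheet1 name, and the placement token is the segment immediately after "_MSN_". With those two pieces, a wildcard MATCH can find the Sheet2 name that contains the placement immediately followed by the size (MATCH is case-insensitive, which conveniently treats 728x90 and 728X90 as equal):

        C2 (helper - size token):
        =LEFT(A2,FIND("_",A2)-1)

        D2 (helper - placement token, e.g. ROSMSNAUTOSMASSACHUSETTS):
        =MID(A2,FIND("_MSN_",A2)+5,FIND("_",A2,FIND("_MSN_",A2)+5)-FIND("_MSN_",A2)-5)

        B2 (the paired Database2 name; range is a placeholder):
        =INDEX(Sheet2!$A$1:$A$200,MATCH("*"&D2&C2&"*",Sheet2!$A$1:$A$200,0))

    Because the pattern requires placement and size adjacent (as in ...ROSMSNAUTOSMASSACHUSETTS728X90_...), the 300X250 row in the second example fails the match, as desired.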

    Read the article

  • Did Windows 7 Startup Repair trash My Documents?

    - by Metaphile
    Earlier today, I rebooted my computer. Partway through the boot process, it shut down suddenly. When I tried again, I was prompted to run Startup Repair, and I did. Afterwards, my computer booted normally and everything seemed to be in order. Then I noticed that my My Documents folder contained a mix of old and new files. On closer inspection, it appears that Windows has reverted my system to a previous state. Two things puzzle me: 1) According to Microsoft, "System Restore does not affect personal files, such as e-mail, documents, or photos [...]", yet many of my personal files have been affected. 2) Why were some things reverted, but not others? I had recently reorganized a bunch of files in My Documents. The reverted directory structure seems to be a hybrid of old and new, with a lot of the new stuff missing. It's hard to say for sure, but it looks like the stuff that's missing would have been in conflict (two folders with the same name, for example), and Windows favored the old stuff. Is this normal behavior for Startup Repair/System Restore? To modify personal files, I mean? Is there a pattern to the mess it's made of My Documents?

    Read the article
