Search Results

Search found 17260 results on 691 pages for 'folder tree'.

Page 592/691 | < Previous Page | 588 589 590 591 592 593 594 595 596 597 598 599  | Next Page >

  • Deleted, then added user w/ same name, now logs on w/ temp profile

    - by labyrinth
    I am a new admin at a high school lab and am trying to spearhead separation of normal IT accounts from IT admin accounts. I made my normal account (e.g. ITuser) and an admin account (e.g. ITuser-adm) on the server (Win Server 2008 R2). I used both accounts on my main desktop for about a day, but decided I hadn't set up the admin account correctly. I deleted my admin account, then made a new one with the same name. The problem is that on my main desktop (Windows 7 Pro), whenever I log in with my admin account, it gives the following errors: Windows has backed up this user profile. Windows will automatically try to use the backup profile the next time this user logs on. (Error 1515) Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off. (Error 1511) This is more of a nuisance than anything for me; I just thought I could use the same name for a user account I'd just deleted since they would have separate SIDs anyway. If it's less trouble, I could just make a new admin account. Or I could just keep using it as is since I don't need to be saving anything locally anyway and the typical folder redirects work fine. I'm just curious and want to understand what's going on. There are no errors listed regarding the registry.

    Read the article

  • Trying to install ffmpeg-php and having installation issues.

    - by dallasclark
    I've installed ffmpeg successfully using the ffmpeginstaller 3 series (http://www.ffmpeginstaller.com/download). ffmpeg is working fine without any known issues with bash. The ffmpeginstaller is meant to install ffmpeg-php but it cannot be found and I receive an error when I execute php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0 Looking at the '/usr/lib64/php/modules/' folder, it doesn't contain the ffmpeg.so file. I've tried to install ffmpeg-php manually but I receive the following error checking for ffmpeg headers... configure: error: ffmpeg headers not found. Make sure you've built ffmpeg as shared libs using the --enable-shared option Should I install ffmpeg with series 4 or 5 of ffmpeginstaller or does someone know how to fix this issue? Thanks in advance ! System Specs cat /etc/redhat-release CentOS release 5.5 (Final) cat /proc/version Linux version 2.6.18-028stab068.5 (root@rhel5-64-build) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP 5.2.13 (cli) (built: Mar 2 2010 18:08:48) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies Any other details you need, just let me know.
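
    The configure error means the ffmpeg being linked against was not built as shared libraries. A rough sketch of the usual sequence - source paths, the ffmpeg-php version and the /etc/php.d layout are assumptions for a stock CentOS PHP:

        cd ~/ffmpeg                          # wherever the ffmpeg source tree lives
        ./configure --enable-shared && make && sudo make install
        sudo ldconfig                        # make the new shared libraries visible to the linker
        cd ~/ffmpeg-php-0.6.0
        phpize && ./configure && make && sudo make install
        echo 'extension=ffmpeg.so' | sudo tee /etc/php.d/ffmpeg.ini
        php -m | grep ffmpeg                 # the module should now be listed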

    Read the article

  • I get a 502 bad gateway ONLY with a specific combination of domain/root folders - NGINX

    - by Patrick De Amorim
    I have a VPS running NGINX and virtual hosts, with a configuration such as this: Domains directing to it: lolpics.no smscloud.no idmag.no Root folders: /home/vds/www/lolpics /home/vds/www/smscloud /home/vds/www/idmag SMSCloud.no is the site that keeps getting 502 errors, but if I make the domain direct to any of the other folders, the site works, or if I make any other domain name direct to the /home/vds/www/smscloud folder, it works. Only smscloud.no with /home/vds/www/smscloud breaks. I tried putting this inside the http{} block in my nginx.conf, with no luck: proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; EDIT: Well, that was slightly silly. If anyone from Google stumbles on this, here's how I fixed it: I just added this to the http{} block: fastcgi_buffer_size 16k; fastcgi_buffers 16 16k; So the start of my http block is now: http { include /etc/nginx/mime.types; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; fastcgi_buffer_size 16k; fastcgi_buffers 16 16k;
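
    A minimal way to apply and verify that kind of change, as a sketch - the conf.d path is an assumption and only works if nginx.conf already includes it at the http{} level:

        printf 'fastcgi_buffer_size 16k;\nfastcgi_buffers 16 16k;\n' | sudo tee /etc/nginx/conf.d/buffers.conf
        sudo nginx -t            # syntax-check the new directives first
        sudo nginx -s reload     # apply them without dropping connections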

    Read the article

  • Unable to stop chrome.exe *32

    - by chipperyman573
    So I was installing roboform today and was unable to stop the process chrome.exe *32... Even when I uninstalled chrome. This is the error I got: I used lockhunter and it said it was located in %appdata%\Local\Google\Chrome. However, it was unable to unlock, delete or rename. When I use explorer to delete or rename that folder, it says it's being used by Chrome. Even after restarting my computer it still does this. I've tried using the built in chrome task manager (Wrench View Background Pages) and I can't seem to find a process there that has the same amount of memory. I have run many, many virus scans, by: Microsoft security essentials AVG (Free version) Malwarebytes (Pro version) Norton 360 (Pro version) McAfee (Pro Version) Avira (Free version) Avast! Antivirus (Free version) None of which returned with any viruses. Chrome info: Google Chrome 23.0.1271.95 (Official Build 169798) OS Windows 7 Professional WebKit 537.11 (@135931) JavaScript V8 3.13.7.5 Flash 11.5.31.2 User Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11

    Read the article

  • Setting Ubuntu Global PATH for Ruby Enterprise Edition

    - by Wally Glutton
    Context: I recently installed Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like for this new version of Ruby to globally supersede (for all users, crontabs, etc) the older version in /usr/local/bin. Attempted Solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH line in this file to read: PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" This did not affect my PATH at all. Attempted Solution #2: Next I followed these instructions and updated the PATH setting in the /etc/login.defs and /etc/crontab files. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server. Other information: I seem to be having the same problem described here. I'm testing using the command: echo $PATH My shell is bash. My .bashrc doesn't alter my PATH. I'm ssh'ed into the system for all testing. /opt/ruby_ee/ is a symlink to /opt/ruby-enterprise-1.8.7-2011.03/
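
    One low-tech sketch that sidesteps PATH ordering entirely, assuming the old Ruby lives in /usr/local/bin: shadow its command names with symlinks to the REE binaries.

        # every user, cron job and script then picks up REE regardless of PATH order
        for b in ruby gem irb rake rdoc ri; do
          sudo ln -sf /opt/ruby_ee/bin/"$b" /usr/local/bin/"$b"
        done
        hash -r              # clear bash's cached command locations in the current shell
        which ruby && ruby -v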

    Read the article

  • Is it possible to extract contents of a ThinApp container?

    - by rsk82
    Is it possible to extract such packages made by somebody else? OK, if there is no general way of extracting such archives - which can be one huge exe file, or a tiny exe launcher with a huge packed *.bin file holding the main files of the app that is to be run in a portable way - is there an option in the compilation *.ini file, or some other way, to make such a package extractable? I remember reading that somebody created a tiny program (it was mentioned on the VMware forums as far as I can remember; it was a crude thing coded for private use and I never managed to download it) that sits alongside the main portable application. If the application has an open/save file dialog, you can navigate to that helper, which virtually sits in Program Files alongside the main app, and it will scan all the files it can see, somehow distinguish a real file from a virtual one, and save all virtual files in a structure similar to the initial compilation folder from which the portable app was created. I know that it is a very roundabout way of doing things, but maybe the only feasible one. Nevertheless, any news on this front? Can antiviruses somehow unpack these things? Maybe they have to buy a code or license for it from VMware? Edit: I found http://communities.vmware.com/thread/257433?tstart=600 and am still trying to make sense of it. Wrong - that thread was about moving an old version of ThinApp to Win7.

    Read the article

  • Why does MySQL in XAMPP only stay running when I start mysqld.exe manually?

    - by Ranjit Kumar
    I am using MySQL with XAMPP v3.0.2. When I start the MySQL server it first shows a running status, and after two or three seconds it stops automatically. As a temporary workaround I go into the XAMPP installation folder (xampp\mysql\bin) and run the mysqld.exe file by hand. I don't know whether that is the correct solution or whether there is a better one - please advise. Error log: 120629 15:29:59 [Note] Plugin 'FEDERATED' is disabled. 120629 15:29:59 InnoDB: The InnoDB memory heap is disabled 120629 15:29:59 InnoDB: Mutexes and rw_locks use Windows interlocked functions 120629 15:29:59 InnoDB: Compressed tables use zlib 1.2.3 120629 15:29:59 InnoDB: Initializing buffer pool, size = 16.0M 120629 15:29:59 InnoDB: Completed initialization of buffer pool InnoDB: The first specified data file D:\xampp\xampp\mysql\data\ibdata1 did not exist: InnoDB: a new database to be created! 120629 15:29:59 InnoDB: Setting file D:\xampp\xampp\mysql\data\ibdata1 size to 10 MB InnoDB: Database physically writes the file full: wait... 120629 15:29:59 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile0 did not exist: new to be created InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile0 size to 5 MB InnoDB: Database physically writes the file full: wait... 120629 15:30:00 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile1 did not exist: new to be created InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile1 size to 5 MB InnoDB: Database physically writes the file full: wait... InnoDB: Doublewrite buffer not found: creating new InnoDB: Doublewrite buffer created InnoDB: 127 rollback segment(s) active. InnoDB: Creating foreign key constraint system tables InnoDB: Foreign key constraint system tables created 120629 15:30:02 InnoDB: Waiting for the background threads to start

    Read the article

  • How to back up non-standard directories in my user profile with Windows Backup?

    - by James Johnston
    I'm using Windows Backup to back up my Win7 Pro laptop. I'd like to use it to back up my complete user profile, but I only see standard profile directories (e.g. C:\Users\JohnstonJ\Documents) in the list. Non-standard ones aren't there (e.g. C:\Users\JohnstonJ\MyCustomDirectory). What's the best way to handle this? The only thing I can think of is to browse under the "Computer" entry and navigate directly to C:\Users\JohnstonJ and check off the entire profile (to get what's in there, and any new directories that come up). But is that going to back up the profile twice? Cause other unforeseen problems given that I checked it off by navigating through the computer, rather than picking it under the "Data Files" category? (e.g. back up temporary file garbage, files in use problems, etc. that the "Data Files" category might be handling better). Looking for solutions that other people use that are known to work well and still uses the Windows Backup software - I don't really want to fuss with 3rd-party backup software. Example - as you can see, I have two directories in my profile that Windows Backup is not offering to back up: "Dropbox" and "New folder": (Link to images album because I don't have enough reputation to directly embed them: http://imgur.com/a/Xyv5u)

    Read the article

  • WebDAV through Apache2 permissions/missing files

    - by Strifariz
    I have a WebDAV setup on Apache2 on a server running Debian 5.0 (Lenny), which I am accessing through a mapped network drive under Windows 7. The setup appears to run fine, I receive no permission errors when copying a file to the share the first time, but the file never shows up in the directory (it's invisible, doing a ls -lha on the directory as root on the server also shows no files. When attempting to copy the file once more I am informed that the file already exists though, and I am asked if I wish to overwrite the file, when selecting "Yes" to this, I receive a permission error saying I'm not able to write to the folder. My logs aren't reporting any access violations of any kind, what could be the problem? (See log excerpt below) [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 201 304 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "LOCK /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "LOCK /1.png HTTP/1.1" 200 447 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "PROPPATCH /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "PROPPATCH /1.png HTTP/1.1" 207 389 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "HEAD /1.png HTTP/1.1" 401 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "HEAD /1.png HTTP/1.1" 200 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:35 +0100] "PUT /1.png HTTP/1.1" 204 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:35 +0100] "PROPPATCH /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:35 +0100] "PROPPATCH /1.png HTTP/1.1" 207 389 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:35 +0100] "UNLOCK /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:35 +0100] "UNLOCK /1.png HTTP/1.1" 204 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:38 +0100] "PROPFIND / HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600" [17/Jan/2011:10:26:38 +0100] "PROPFIND / HTTP/1.1" 207 1634 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
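
    Since the PUTs in that log come back 201/204, the file is landing somewhere; a quick sketch for finding out where Apache is really writing and whether the DAV lock database location is usable (paths are Debian defaults and may differ):

        sudo find / -xdev -name 1.png 2>/dev/null        # where did the upload actually go?
        grep -RinE 'DocumentRoot|Alias /|Dav |DavLockDB' /etc/apache2/ 2>/dev/null
        ls -ld /var/lock/apache2                         # the DavLockDB directory must be writable by www-data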

    Read the article

  • Embedded Spotlight does not function in Outlook 2011

    - by syntaxcollector
    I have a rather strange problem. I manage a network of about 35 Macs, and we all recently switched from Mail.app to Outlook 2011 (please don't debate this, I've already had this conversation ad nauseam). We are using network home directories (NHD) served from a Windows file server over the SMB protocol. The problem I'm having is that Spotlight does not function inside of Outlook - but ONLY inside of Outlook. The global Spotlight can find all email and contacts within Outlook, but the embedded Spotlight cannot. As a test, I took one of my users and switched them from a network home directory to a portable home directory (PHD) (this means the home folder was copied to the local hard drive). This resulted in a working Spotlight within Outlook; as soon as I switched the user back to an NHD, however, it stopped working. I have already tried erasing the Spotlight index and killing the process to force re-indexing. I have exhausted all Spotlight troubleshooting, and since global Spotlight is working that is obviously not the issue. I believe it has something to do with the Spotlight plugin Microsoft wrote that is located in /Library/Spotlight. Any ideas?
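
    Two quick checks that narrow down whether the index covers the SMB-hosted home at all - a sketch using the stock tools; nothing here is specific to Outlook:

        mdimport -L                                   # list installed Spotlight importers; Microsoft's should appear
        mdfind -onlyin "$HOME" -name Outlook | head   # does the index return anything from the network home?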

    Read the article

  • Embed album art in OGG through command line in linux

    - by teratomata
    I want to convert my music from flac to ogg, and currently oggenc does that perfectly except for album art. Metaflac can output album art; however, there seems to be no command line tool to embed album art into ogg. MP3Tag and EasyTag are able to do it, and there is a specification for it here which calls for the image to be base64 encoded. However, so far I have been unsuccessful at taking an image file, converting it to base64 and embedding it into an ogg file. If I take a base64 encoded image from an ogg file that already has the image embedded, I can easily embed it into another ogg file using vorbiscomment: vorbiscomment -l withimage.ogg > textfile vorbiscomment -c textfile noimage.ogg My problem is taking something like a jpeg and converting it to base64. Currently I have: base64 --wrap=0 ./image.jpg which gives me the image file converted to base64. Using vorbiscomment and following the tagging rules, I can embed that into an ogg file like so: echo "METADATA_BLOCK_PICTURE=$(base64 --wrap=0 ./image.jpg)" > ./folder.txt vorbiscomment -c ./folder.txt noimage.ogg However, this gives me an ogg whose image does not work properly. I noticed when comparing the base64 strings that all properly embedded pictures have a header but all the base64 strings I generate are lacking this header. Further analysis of the header: od -c header.txt 0000000 \0 \0 \0 003 \0 \0 \0 \n i m a g e / j p 0000020 e g \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 0000040 \0 \0 \0 \0 \0 \0 \0 \0 035 332 0000052 Which follows the spec given above. Notice 003 corresponds to front cover and image/jpeg is the mime type. So finally, my question is: how can I base64 encode a file and generate this header along with it for embedding into an ogg file?
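
    A sketch of one way to build that header in the shell, assuming a JPEG cover and GNU coreutils (base64, stat) plus xxd; filenames are placeholders. It writes the FLAC-style picture block the spec describes - type 3 (front cover), MIME type, empty description, zeroed dimensions (which the dump above also has), then the raw image - and base64-encodes the whole thing:

        img=cover.jpg; mime=image/jpeg
        be32() { printf '%08x' "$1" | xxd -r -p; }       # 32-bit big-endian integer
        {
          be32 3                                         # picture type 3 = front cover
          be32 ${#mime}; printf '%s' "$mime"             # MIME length + MIME string
          be32 0                                         # empty description
          be32 0; be32 0; be32 0; be32 0                 # width, height, depth, colour count
          be32 "$(stat -c%s "$img")"; cat "$img"         # data length + raw image bytes
        } | base64 --wrap=0 | sed 's/^/METADATA_BLOCK_PICTURE=/' > tags.txt
        vorbiscomment -a -c tags.txt noimage.ogg         # -a appends, keeping existing tags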

    Read the article

  • Easyphp Web Setup

    - by Dominique
    I've tried to set up EasyPHP locally and make it visible from the web via DynDNS, which I've already done successfully many times before, but now it just doesn't work; maybe I've forgotten something... *The "server" is a common workstation. Here is what I have done: 1) Installed EasyPHP (with an index.php/html file in the WWW folder) 2) Changed the port in the config to port 80 3) Forwarded port 80 to the server IP in my router configuration 4) Added the server to the router DMZ *Also tried removing antivirus/firewall. I've installed PortListener, pointed it at port 80, and when I access "myname.dyndns.com" it says: Client connected GET / HTTP/1.1 Host: xyz.dyndns-remote.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive So the server is accessible from the web and receives the connection successfully, but in my browser it says that the connection failed and shows nothing...

    Read the article

  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I have seemingly everything set up and cachefilesd starts normally; however, caching isn't functioning. Here is the output of the commands, which I ran in this order: 1 sudo mount ... cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc) ... 2 lsmod | grep cachefiles cachefiles 40555 1 fscache 57430 4 nfs,cifs,cachefiles,nfsv4 3 [edited - deleted] 4 uname -r 3.8.0-34-generic 5 grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic CONFIG_NFS_FSCACHE=y 6 lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 13.04 Release: 13.04 Codename: raring 7 sudo service cachefilesd restart * Restarting FilesCache daemon cachefilesd [ OK ] 8 dmesg [6211206.141781] FS-Cache: Withdrawing cache "mycache" [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles) [6211210.135242] CacheFiles: File cache on sdb1 registered [6214644.348929] CacheFiles: File cache on sdb1 unregistering [6214644.348935] FS-Cache: Withdrawing cache "mycache" [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles) [6214654.575915] CacheFiles: File cache on sdb1 registered 9 ps aux | grep cachefilesd root 65399 0.0 0.0 4460 540 ? Ss 23:14 0:00 /sbin/cachefilesd 1000 65464 0.0 0.0 8160 916 pts/0 S+ 23:16 0:00 grep --color=auto cachefilesd finally, the biggest problem is 10 cat /proc/fs/nfsfs/volumes NV SERVER PORT DEV FSID FSC v3 64476645 801 0:24 233e020f0da07a93 no tl;dr I think I configured everything properly but the fsc mount option + cachefilesd don't seem to work
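
    One thing worth trying, as a sketch: an NFS superblock only attaches to FS-Cache when it is mounted, so the share has to be (re)mounted after cachefilesd is already running - mount point and options copied from the question:

        sudo service cachefilesd start       # make sure the cache daemon is up first
        sudo umount /mnt/nfsshare
        sudo mount -t nfs -o vers=3,fsc cache-1:/mnt/datashared /mnt/nfsshare
        cat /proc/fs/nfsfs/volumes           # the FSC column should now read "yes"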

    Read the article

  • Split Excel worksheet into multiple worksheets based on a column with VBA (Redux)

    - by Ceeder
    I'm rather new to VBA and I've been working with the code generously displayed and explained by Nixda: Split Excel Worksheet... My only challenge is I've been trying desperately to find a way to include the top 3 rows as a title but it seems to only allow for one. Here's the code I have: Dim Titlesheet As Worksheet iCol = 23 '### Define your criteria column strOutputFolder = (Sheets("Operations").Range("D4")) '### <--Define your path of output folder Set ws = ThisWorkbook.ActiveSheet Set rngLast = Columns(iCol).Find("*", Cells(3, iCol), , , xlByColumns, xlPrevious) Set Titlesheet = Sheets("Input") ws.Columns(iCol).AdvancedFilter Action:=xlFilterInPlace, Unique:=True Set rngUnique = Range(Cells(4, iCol), rngLast).SpecialCells(xlCellTypeVisible) If Dir(strOutputFolder, vbDirectory) = vbNullString Then MkDir strOutputFolder For Each strItem In rngUnique If strItem <> "" Then Sheets("Input").Select Range("A1:V3").Select Selection.Copy ws.UsedRange.AutoFilter Field:=iCol, Criteria1:=strItem.Value Workbooks.Add Sheets("Sheet1").Select ActiveSheet.PasteSpecial ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=[A4] strFilename = strOutputFolder & "\" & strItem ActiveWorkbook.SaveAs Filename:=strFilename, FileFormat:=xlWorkbookNormal ActiveWorkbook.Close savechanges:=False End If Next ws.ShowAllData Is there something I can change to include these lines? Thanks so much, this code provided by Nixda has taught me a great deal!

    Read the article

  • nginx + wordpress in /wordpress subdir

    - by nkr1pt
    I installed nginx and would like to setup wordpress as a final step. I followed many howtos but am unable to get it working. The setup is fairly straightforward, the root dir of the webserver is /data/Sites/nkr1pt.homelinux.net. In that root dir I created a symlink to the wordpress folder in /usr/local/wordpress, so in fact all wordpress files can be accessed at /data/Sites/nkr1pt.homelinux.net/wordpress. Permissions are ok. The plan is to get wordpress working at http://sirius/wordpress, the server's name is sirius. spawn-fcgi is running and listening on port 7777. Here you can see the relevant config: server { listen 80; listen 8080; server_name sirius; root /data/Sites/nkr1pt.homelinux.net; passenger_enabled on; passenger_base_uri /redmine; #charset koi8-r; #access_log logs/access.log main; location ^~ /data { root /data/Sites/nkr1pt.homelinux.net; autoindex on; auth_basic "Restricted"; auth_basic_user_file htpasswd; } location ^~ /dump { root /data/Sites/nkr1pt.homelinux.net; autoindex on; } location ^~ /wordpress { try_files $uri $uri/ /wordpress/index.php; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:7777 location ~ \.php$ { #fastcgi_split_path_info ^(/wordpress)(/.*)$; fastcgi_pass localhost:7777; #fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; #index index.php; } please note that redmine, and the locations dump and data are working perfectly, it is only wordpress that I cannot get to work. Can you please help me to the correct wordpress configuration in nginx? All help is much appreciated!
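
    A sketch of the shape those two blocks usually take when WordPress lives in a /wordpress subdirectory with FastCGI on port 7777 - note the ?$args on the try_files fallback and the uncommented fastcgi_index and SCRIPT_FILENAME. It is written to a separate file here only to keep the example in the shell; include that file from the server{} block in place of the two blocks above (file name is an assumption):

        {
          echo 'location /wordpress {'
          echo '    try_files $uri $uri/ /wordpress/index.php?$args;'
          echo '}'
          echo 'location ~ \.php$ {'
          echo '    include fastcgi_params;'
          echo '    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;'
          echo '    fastcgi_index index.php;'
          echo '    fastcgi_pass 127.0.0.1:7777;'
          echo '}'
        } | sudo tee /etc/nginx/wordpress-locations.conf >/dev/null
        # then, inside the existing server { } block:  include /etc/nginx/wordpress-locations.conf;
        sudo nginx -t && sudo nginx -s reload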

    Read the article

  • nginx 301 redirect to subfolder on primary domain

    - by 187j3x1
    Hello there, sorry for my poor English. I just set up WordPress on my VPS; so far it's the only thing on my site. Therefore, for SEO reasons, I think it's better to redirect the whole primary domain to the blog folder. The primary domain is example.com and WordPress is at example.com/blog. What I want is to rewrite www.example.com and example.com to example.com/blog. I googled, got some scripts, made some changes, and pasted them into my nginx config file. Here it is: #301 redirect www to non-www server { server_name www.example.com; location = / { rewrite ^/(.*) http://example.com/$1 permanent; } } #301 non-www to subfolder server { server_name example.com; location = / { rewrite ^/(.*) http://example.com/blog$1 permanent; } } It works to some degree and successfully redirects to example.com/blog; the only problem is that I get a 404 Not Found error. If I instead make nginx redirect only www to example.com/blog, I can access the blog page. I know there is something wrong in the non-www-to-subfolder script, but I don't know how to fix it :(

    Read the article

  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: It's an office, and the person who set this all up is nowhere to be found. I'm the best hope they have... One of the programs they use on a workstation gives them an error of "Could not allocate space for object 'Billing' in database "MyDatabase" because primary filegroup is full" when trying to save an entry in their software. I searched around for hours, looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server, and there is plenty of space free. I also defragged, which may have helped the problem somewhat. It's hard to say, because it seems like with the nature of the error, if you try over and over you might get it to actually save. My next step was to try to see if autogrowth was enabled on the database. This would seem to be a likely / possible solution, but I can't access the database! If I run the SQL Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried runing Management Studio as Administrator, in case that would help. No difference, though. Now, what I'm guessing is going on -- from my limited knowledge of SQL and from reading online -- is that though I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including SA, but I again don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So... here's my thinking thus far: 1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow. Especially since I verified there is plenty of free space and the HD has been defragmented. 2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Management Stuido. That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx It sounds like it's a step-by-step guide for giving me the access I need to SQL. I'm guessing that if I followed this guide, I would be able to then log in to the SQL server via Management Studio with the proper permissions, and would be able to enable autogrow (or simply view the status of the existing database), and hopefully solve the "Could not allocate space" problem! So I guess I have a few questions: 1 - Would you guys agree with my "diagnosis"? Think I'm barking up the right tree? 2 - Is there any risk at all in hurting / disabling / wrecking the current SQL database or setup with me going through the guide to regain SQL access? I understand that per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how. Any feedback would be appreciated!! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful. 
But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke

    Read the article

  • Pure-FTPD accounts and permissions for websites

    - by EddyR
    I'm having trouble setting up the appropriate Pure-FTPd accounts and permissions - I have the following sites set up on my Debian server: /var/www/site1 /var/www/site2 /var/www/wordpress The permissions are 775 for folders and 664 for files. The owner is currently admin:ftpgroup. WordPress also requires special permissions for file uploads in /var/www/wordpress/wp-content/uploads What I need is: a general admin group with access to /var/www a group for each site (site1, site2, wordpress) and a group or user, not www-data (?), with permissions to write files to the wordpress upload folder I ask because restrictions on Linux groups (can't have groups in groups) make it a little bit confusing, and also because many of the tutorial sites have conflicting information - some recommend the use of www-data and some don't. Also, I'm not sure if I understand how Pure-FTPd is supposed to work exactly. I create a Pure-FTPd account and assign it a directory (/var/www) and a system user (ftpuser) and group (ftpgroup): Can I assign more than 1 path? For example, if a user requires access to 2 sites. Is it better to assign ftpgroup to all ftp locations and let Pure-FTPd manage account access? Why would anyone have more than 1 ftpuser or ftpgroup? (Doesn't it mean users have access to everyone else's files if they could get there?) Sorry for so many questions at once. I've been reading lots of tutorials but I think they've ended up making me more confused!
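
    A sketch of one common layout, assuming PureDB ("virtual user") authentication is enabled: every virtual user maps to the same ftpuser:ftpgroup system pair but is chrooted to its own docroot, and the WordPress uploads directory is made group-writable with the setgid bit so both FTP uploads and PHP keep the right group. All names are placeholders:

        sudo pure-pw useradd site1 -u ftpuser -g ftpgroup -d /var/www/site1
        sudo pure-pw useradd wp    -u ftpuser -g ftpgroup -d /var/www/wordpress
        sudo pure-pw mkdb                                  # rebuild the PureDB after any change
        sudo chgrp -R ftpgroup /var/www/wordpress/wp-content/uploads
        sudo chmod -R 2775 /var/www/wordpress/wp-content/uploads   # group-writable + setgid
        sudo usermod -a -G ftpgroup www-data               # lets PHP/WordPress write there as well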

    Read the article

  • Broken CUPS installation on a 64-bit Ubuntu server

    - by user67046
    Hi, I am having trouble with an cups installation. It seems to be in a broken state. When i try to reinstall it it stalls, the same if i try to remove it completely. I am running the server version 64 bit of Ubuntu 10.10 with kernel Linux version 2.6.35-22-server. When i try to start the cups daemon with the following command sudo service cups start It just stays there and nothing happens. I have tried to remove it, to be able to reinstall it, with the following command sudo apt-get purge cups It finally stalls with the following message Removing cups ... After that nothing happens. The process tree for the apt-get command looks like this. 1404 1404 1404 ? 00:00:00 sshd 26495 26495 26495 ? 00:00:00 sshd 26581 26495 26495 ? 00:00:00 sshd 26582 26582 26582 pts/4 00:00:00 bash 27158 27158 26582 pts/4 00:00:00 apt-get 27172 27172 27172 pts/2 00:00:00 dpkg 27176 27172 27172 pts/2 00:00:00 cups.prerm 27178 27172 27172 pts/2 00:00:00 stop I have tried to leave the process running for a while to see if i get any error messages but without success. To get out of it I have to kill the processes. sudo dpkg --configure cups dpkg: error processing cups (--configure): package cups is already installed and configured Errors were encountered while processing: cups sudo dpkg --status cups Package: cups Status: purge ok installed Priority: optional Section: net Installed-Size: 8292 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Version: 1.4.4-6ubuntu2.3 Replaces: cupsddk-drivers (<< 1.4.0) Provides: cupsddk-drivers Depends: libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16), libc6 (>= 2.7), libcups2 (>= 1.4.4-3~), libcupscgi1 (>= 1.4.2), libcupsdriver1 (>= 1.4.0), libcupsimage2 (>= 1.4.0), libcupsmime1 (>= 1.4.0), libcupsppdc1 (>= 1.4.0), libdbus-1-3 (>= 1.0.2), libgcc1 (>= 1:4.1.1), libgnutls26 (>= 2.7.14-0), libgssapi-krb5-2 (>= 1.8+dfsg), libijs-0.35, libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpaper1, libpoppler7, libslp1, libstdc++6 (>= 4.1.1), libusb-0.1-4 (>= 2:0.1.12), zlib1g (>= 1:1.1.4), debconf (>= 1.2.9) | debconf-2.0, upstart-job, poppler-utils (>= 0.12), procps, ghostscript, lsb-base (>= 3), cups-common (>= 1.4.4), cups-client (>= 1.4.4-6ubuntu2.3), ssl-cert (>= 1.0.11), adduser, bc, ttf-freefont, cups-ppdc Recommends: foomatic-filters (>= 4.0), cups-driver-gutenprint, ghostscript-cups Suggests: cups-bsd, foomatic-db-compressed-ppds | foomatic-db, hplip, xpdf-korean | xpdf-japanese | xpdf-chinese-traditional | xpdf-chinese-simplified, cups-pdf, smbclient (>= 3.0.9), udev Breaks: foomatic-filters (<< 4.0) Conflicts: cupsddk-drivers (<< 1.4.0) Conffiles: /etc/fonts/conf.d/99pdftoopvp.conf a5221cfad70a981c80864229ef56586d /etc/logrotate.d/cups 5bb41fa9900f0d1c565954405a2bd7c4 /etc/default/cups 2b436fbb1a32b82b6aba45a76a1d7e40 /etc/pam.d/cups ff2488324854f7b1e892bb0df062d5f0 /etc/init/cups.conf 1a3cd022e8474e3d2b44640f33ce68e3 /etc/ufw/applications.d/cups 29e98a6d850da251e180c3d68dec2bd3 /etc/apparmor.d/usr.sbin.cupsd 60c4b26bfd5c033baa3dd48a3b2e9911 /etc/cups/cupsd.conf e2c7ec15835ea0939e5e86f7c6efcc03 /etc/cups/snmp.conf 2326a8af1e112676d55245bc5eb459ca /etc/cups/cupsd.conf.default a68d54d76021e857dd1d64edf57d36c5 Description: Common UNIX Printing System(tm) - server The Common UNIX Printing System (or CUPS(tm)) is a printing system and general replacement for lpd and the like. It supports the Internet Printing Protocol (IPP), and has its own filtering driver model for handling various document types. . 
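
    The process tree above shows the purge hanging inside cups.prerm on an upstart "stop" that never returns. One workaround that is sometimes used for a stuck maintainer script - treat it as a sketch, keep the backup, and note that bypassing the prerm skips whatever cleanup it was meant to do:

        sudo cp /var/lib/dpkg/info/cups.prerm /root/cups.prerm.bak    # keep a copy
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/cups.prerm >/dev/null
        sudo apt-get purge cups       # should now complete instead of hanging
        sudo apt-get install cups     # reinstall cleanly afterwards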
This package provides the CUPS scheduler/daemon and related files. Original-Maintainer: Debian CUPS Maintainers <[email protected]> Would be greatful if someone could provide some help on how to solve this issue.

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a macbook pro i5. my understanding is that by default it should be able to serve php5. i have uncommented the relevant line in /etc/apache2/httpd.conf LoadModule php5_module libexec/apache2/libphp5.so I have restarted apache with sudo apachectl -k restart and when i try to access a file with a php extension, Apache prompts me to download the file. i.e. instead of processing the php and sending me html, it thinks i want to download the file.... when i look in apache error log i see this [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations so it looks like php5 is loading properly. I'd like to know either: How do i fix this? or How do I reinstall apache2 so that it's like i just installed the os? thanks in advance update @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf and i have a file /etc/apache2/other/php.conf with the contents <IfModule php5_module> AddType application/x-httpd-php .php AddType application/x-httpd-php-source .phps <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> </IfModule> @Zayne I've already copied php.ini.default to php.ini in the same folder. when i run sudo apachectl configtest i get /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName Syntax OK furthermore i decided to try apachectl -M which shows all loaded modules Most importantly in the list of loaded modules i got Loaded Modules: php5_module (shared) Since the module is being loaded, it seems like the issue has more to do with making apache use php engine to process the php files.... so something wrong with the ifmodule directive?
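
    A few quick checks, as a sketch - they assume the stock Apple httpd on port 80 is the one answering and that the default DocumentRoot is in use:

        httpd -V | grep -i server_config_file        # which httpd.conf this build really reads
        apachectl -t -D DUMP_MODULES | grep -i php   # is php5_module loaded from that config?
        echo '<?php phpinfo();' | sudo tee /Library/WebServer/Documents/info.php >/dev/null
        curl -sI http://localhost/info.php | grep -i content-type   # text/html means PHP ran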

    Read the article

  • How to create custom content for the nginx 502 error page while keeping the original URL in the browser

    - by user123862
    I'm trying to serve a custom-language message for the nginx 502 error page while keeping the URL in the browser, without success so far. For example: I go to xaluan.com/aaa/bbb.html while the backend is down, and nginx shows error 502 - I want the same URL but a custom message in my language. Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site still shows the default nginx error, and domain.com/502.html does not show the content I created: error_page 502 /502.html; location = /502.html { root /usr/local/nginx/html; } Test 2: Then I created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html - the content is now what I created, but the URL is still not what I need: error_page 502 /502.html; location = /502.html { root /home/xaluano/public_html; internal; } EDIT/UPDATE for more detail, 10/06/2012: please download my nginx config http://pastebin.com/7iLD6WQq and vhost config http://pastebin.com/ZZ91KiY6. The test case: if the Apache httpd service is stopped (#service httpd stop) and I then browse to xaluan.com/modules.php?name=News&file=article&sid=123456, I will see the 502 error with the same URL in the address bar. Custom error page: I need a config so that when Apache fails, nginx shows a custom message telling the user to wait one minute for the service to come back and then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). Nginx must not change the URL, so the JavaScript can work. Any help would be great - thanks in advance.

    Read the article

  • How to upgrade Apache 2 from 2.2 to 2.4

    - by Nina
    I was in the process of doing a test upgrade from Apache 2.2 to 2.4.3. I'm using Ubuntu 10.04. I would have upgraded to 12.04 for this to see if the upgrade would go a lot smoother. Unfortunately, I was told it wasn't an option...so I'm stuck using 10.04. The process I did this was: Before attempting this, I have managed to upgrade APR from 1.3 to 1.4 as well since apache told me it was a requirement beforehand: http://apr.apache.org/download.cgi First remove all traces of the current apache: sudo apt-get --purge remove apache2 sudo apt-get remove apache2-common apache2-utils apache2.2-bin apache2-common sudo apt-get autoremove whereis apache2 sudo rm -Rf /etc/apache2 /usr/lib/apache2 /usr/include/apache2 Afterwards, I did the following: sudo apt-get install build-essential sudo apt-get build-dep apache2 Then install apache 2.4 with the following: wget http://apache.mirrors.tds.net//httpd/httpd-2.4.3.tar.gz tar -xzvf httpd-2.4.3.tar.gz && cd httpd-2.4.3 sudo ./configure --prefix=/usr/local/apache2 --with-apr=/usr/local/apr --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork sudo make sudo make install After the make install, I ended up getting a series of errors that prevented it from installing correctly: exports.c:2513: error: redefinition of 'ap_hack_apr_uid_current' exports.c:1838: note: previous definition of 'ap_hack_apr_uid_current' was here exports.c:2514: error: redefinition of 'ap_hack_apr_uid_name_get' exports.c:1839: note: previous definition of 'ap_hack_apr_uid_name_get' was here exports.c:2515: error: redefinition of 'ap_hack_apr_uid_get' exports.c:1840: note: previous definition of 'ap_hack_apr_uid_get' was here exports.c:2516: error: redefinition of 'ap_hack_apr_uid_homepath_get' Looking for exports.c only leads me back to the httpd-2.4.3 folder. So I'm not sure what these errors mean... Thanks in advance for any help you have to offer!
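
    exports.c is generated from the headers the build scans, and duplicate ap_hack_apr_* symbols usually mean two different APR installations were picked up at once. One way around it, as a sketch (version numbers and source paths are assumptions), is to drop APR and APR-util into srclib/ and build with the bundled-APR option so only one copy is ever seen:

        cd httpd-2.4.3
        make distclean || true                       # start from a clean tree
        cp -r ../apr-1.4.6 srclib/apr
        cp -r ../apr-util-1.5.1 srclib/apr-util
        ./configure --prefix=/usr/local/apache2 --with-included-apr \
                    --enable-mods-shared=all --with-mpm=prefork
        make && sudo make install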

    Read the article

  • Broken UAC: can't edit files/folders or change settings in user accounts

    - by Antoros
    It appears that UAC is broken: I can't move/delete some files/folders. When asked to use administrative rights (and I say yes), it shows the loading bar and then nothing gets moved/deleted, with no error whatsoever. I opened Control Panel > User Accounts, and when I click on any option with the shield (administrative rights), the mouse changes to loading and then goes back to normal, without opening any menu or showing any error. Already did an sfc /scannow - no errors found. Already used Microsoft's Fix It - recycle bin broken and repaired, still the same error. Used the microsoftaccounts troubleshooter; these are the errors I got: "Problem with Microsoft account policy" <- this is the problem (didn't fix); "Trust this PC" <- loop of redirections, can't get to Trust this PC (only one with W8); "Problems with system registration" - I think this is because of the soft system reset; "Some settings have sync turned off" - I never configured anything to sync; "Root causes found and created logs" - I would like to know where the logs are saved... I had to use a ".reg" file to change the UAC setting to Never Notify, thinking it would fix this; it didn't, though it did stop asking. I can still open a cmd with administrative rights, but can't access the UAC settings. I enabled the built-in Administrator account (net user administrator /active:yes) and even with that account I couldn't change any settings, so there it is - I don't know what else to do at the moment. (This PC broke with the 8.1 update and was restored to factory configuration, which broke several drivers and kept most of the registry entries; I can't find the cause of this problem.) Other info: I tried to delete a file in a program folder and couldn't. I downloaded Unlocker to check first whether it was a permissions issue, but no - it showed me a message telling me there wasn't any error and asking if I would like to delete the file; I clicked yes and it did delete it. What amuses me is that I can't do this without the tool, not even using the feature that takes ownership. Edit: wow, on a non-crappy PC chkdsk is fast - completed with no errors found :/

    Read the article

  • Mystery undeletable file

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf. it's not readonly, system or hidden it's not in use by another process (according to Process Explorer) the NT security permissions aren't the problem either - I am the owner and have Full Control ; as a double-check, the Effective Permissions tab shows that I have permission to delete. Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can however rename it or move it to another folder on the same drive. I can also read it and Virustotal says it's clean which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL I think). The relevant line from Process Monitor is: 6:52:14.3726983 PM 112 Explorer.EXE SetDispositionInformationFile C:\Config.Msi\75ce84f.rbf CANNOT DELETE Delete: True Write 1232 Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit. (there seems to be no UI to do it otherwise?) So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient I declined the offer and ran pendmoves to find out what files the installer had scheduled to move / delete. It wanted to delete two files with .rbf extension (rollback files) located in C:\Config.msi\. (this applies to both even though I've been speaking about one). So I tried to delete them manually and couldn't. Does anyone have any ideas what could be preventing deletion? (and I don't think it's malware even though I'm not running AV at the moment)

    Read the article

  • install python2.7.3 + numpy + scipy + matplotlib + scikits.statsmodels + pandas0.7.3 correctly

    - by boldnik
    ...using Linux (xubuntu). How to install python2.7.3 + numpy + scipy + matplotlib + scikits.statsmodels + pandas0.7.3 correctly ? My final aim is to have them working. The problem: ~$ python --version Python 2.7.3 so i already have a system-default 2.7.3, which is good! ~$ dpkg -s python-numpy Package: python-numpy Status: install ok installed and i already have numpy installed! great! But... ~$ python Python 2.7.3 (default, Oct 23 2012, 01:07:38) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as nmp Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named numpy this module couldn't be find by python. The same with scipy, matplotlib. Why? ~$ sudo apt-get install python-numpy [...] Reading package lists... Done Building dependency tree Reading state information... Done python-numpy is already the newest version. [...] why it does not see numpy and others ? update: >>> import sys >>> print sys.path ['', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7', '/usr/local/lib/python2.7/plat-linux2', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages'] >>> so i do have /usr/local/lib/python2.7 ~$ pip freeze Warning: cannot find svn location for distribute==0.6.16dev-r0 BzrTools==2.4.0 CDApplet==1.0 [...] matplotlib==1.0.1 mutagen==1.19 numpy==1.5.1 [...] pandas==0.7.3 papyon==0.5.5 [...] pytz==2012g pyxdg==0.19 reportlab==2.5 scikits.statsmodels==0.3.1 scipy==0.11.0 [...] zope.interface==3.6.1 as you can see, those modules are already installed! But! ls -la /usr/local/lib/ gives ONLY python2.7 dir. And still ~$ python -V Python 2.7.3 and import sys sys.version '2.7.3 (default, Oct 23 2012, 01:07:38) \n[GCC 4.6.1]' updated: Probably I've missed another instance... One at /usr/Python-2.7.3/ and second (seems to be installed "by hands" far far ago) at /usr/python2.7.3/Python-2.7.3/ But how two identical versions can work at the same time??? Probably, one of them is "disabled" (not used by any program, but I don't know how to check if any program uses it). ~$ ls -la /usr/bin/python* lrwxrwxrwx 1 root root 9 2011-11-01 11:11 /usr/bin/python -> python2.7 -rwxr-xr-x 1 root root 2476800 2012-09-28 19:48 /usr/bin/python2.6 -rwxr-xr-x 1 root root 1452 2012-09-28 19:45 /usr/bin/python2.6-config -rwxr-xr-x 1 root root 2586060 2012-07-21 01:42 /usr/bin/python2.7 -rwxr-xr-x 1 root root 1652 2012-07-21 01:40 /usr/bin/python2.7-config lrwxrwxrwx 1 root root 9 2011-10-05 23:53 /usr/bin/python3 -> python3.2 lrwxrwxrwx 1 root root 11 2011-09-06 02:04 /usr/bin/python3.2 -> python3.2mu -rwxr-xr-x 1 root root 2852896 2011-09-06 02:04 /usr/bin/python3.2mu lrwxrwxrwx 1 root root 16 2011-10-08 19:50 /usr/bin/python-config -> python2.7-config there is a symlink python-python2.7, maybe I can ln -f -s this link to exact /usr/Python-2.7.3/python destination without harm ?? And how correctly to remove the 'copy' of 2.7.3?
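
    A quick way to confirm the usual culprit here - a self-compiled interpreter in /usr/local shadowing Ubuntu's /usr/bin/python, each with its own package directory - as a sketch (Python 2 syntax):

        type -a python                                       # every python on the PATH, in order
        python -c 'import sys; print sys.prefix'             # the interpreter you get by default
        /usr/bin/python -c 'import numpy; print numpy.__version__'   # the distro build sees apt's numpy
        python -c 'import numpy'                             # the /usr/local build likely does not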

    Read the article
