Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 376/465

  • Converting flv and mp4 video format to '.ogg' using FFmpeg

    - by user163906
    I have a HostGator VPS server with FFmpeg installed. It allows me to convert .wmv to .flv as well as .mp4 files successfully using the following commands for flv and mp4: ffmpeg -i WantsABath.wmv -b 600k -r 24 -ar 22050 -ab 96k WantsABath.flv ffmpeg -i WantsABath.wmv WantsABath.mp4 but it won't allow me to convert any file format to .ogg. I tried using the command: ffmpeg -i input.mp4 -acodec libvorbis -vcodec libtheora -f ogv output.ogv suggested by mondain, but no luck with it. I suspect that my VPS doesn't have libtheora installed. I tried configuring it over SSH but I don't know how to make sure whether it is installed or not. I tried checking with phpinfo but can't find anything regarding libtheora. Here's my FFmpeg version: FFmpeg version SVN-r19795, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --enable-libmp3lame --enable-libvorbis --disable-mmx --enable-shared --prefix=/usr/ --enable-gpl libavutil 50. 3. 0 / 50. 3. 0 libavcodec 52.35. 0 / 52.35. 0 libavformat 52.38. 0 / 52.38. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 1 / 0. 7. 1 These details don't show libtheora. Can anyone please suggest something?
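
    The configuration line pasted above indeed contains no --enable-libtheora, which supports the suspicion that theora support was never built in. A quick way to confirm from SSH is to grep the build flags and codec list; this is a minimal sketch assuming a standard FFmpeg build on the VPS (the rebuild line is illustrative and depends on how the existing binary was compiled).
        # The banner FFmpeg prints on every run includes the configure flags;
        # libtheora support shows up as --enable-libtheora
        ffmpeg -version 2>&1 | grep -- '--enable-libtheora'

        # Older builds list encoders under -formats, newer ones under -codecs
        ffmpeg -formats 2>&1 | grep -i theora
        ffmpeg -codecs  2>&1 | grep -i theora

        # If nothing shows up, FFmpeg has to be rebuilt with libtheora, e.g.:
        # ./configure --enable-gpl --enable-libvorbis --enable-libtheora && make && make install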

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The task was to make the PC as fast as possible, while having increased storage capacity available to store normal user data and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files [(x86) for the 32-bit programs]\, although I have changed the environment variables. This wouldn't normally be an issue, however every installation program points its shortcut to my 640GB HDD. The root layout of both drives: To clarify: Program files get installed to C:\ Program shortcuts are always pointed to Z:\, my 640GB HDD Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but the installation programs sometimes don't let me change this. Is there a way that I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
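
    One approach that sidesteps installers which refuse a custom path is an NTFS junction, so a folder that still appears under C:\Program Files physically lives on Z:. This is only a sketch for an elevated command prompt, and "SomeApp" is a hypothetical folder name; junctioning individual application folders is safer than relocating the whole Program Files tree, which can break Windows servicing.
        rem Move the already-installed folder to Z: and leave a junction behind on C:
        robocopy "C:\Program Files\SomeApp" "Z:\Program Files\SomeApp" /E /MOVE
        mklink /J "C:\Program Files\SomeApp" "Z:\Program Files\SomeApp"
        rem Existing shortcuts keep working because the old path still resolves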

    Read the article

  • Installing Linux on a Windows 8.1 laptop

    - by nicoX
    I would like to clean install a Linux distribution such as Ubuntu on my laptop, which runs Windows 8.1. I have two options in mind: clean install or dual boot. My technical question is: my laptop has an 8GB SSD drive, which it uses to boot Windows, and a 500GB drive for storage. I wonder what that 8GB SSD stores? It can't store the whole Windows install, as that would be much more than 8GB. Also, if I did a clean install of Ubuntu, could I use the 8GB SSD to have Ubuntu boot up quicker, and how would I install it? Option two: if I would like to dual boot, how would I proceed to have the SSD boot both systems? I also wish to ask about the Legacy and UEFI differences. Windows runs with UEFI. So when I'm installing Linux, should I run Legacy, and if I dual boot, which option do I choose?
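
    If you do install Ubuntu, a quick way to tell whether the running system (or the live installer session) booted through UEFI rather than legacy BIOS is to look for the EFI firmware directory; this is a small sketch assuming a standard Ubuntu shell.
        # Present only when the system was booted in UEFI mode
        if [ -d /sys/firmware/efi ]; then
            echo "Booted via UEFI"
        else
            echo "Booted via legacy BIOS"
        fi

        # List the existing partitions to see what the 8GB SSD actually holds
        sudo parted -l
    As a rule of thumb, installing Ubuntu in the same mode as Windows (UEFI here) keeps dual boot simple, because both systems then share one boot mechanism.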

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.) set :user, 'root' set :domain, 'domainname.com' set :application, 'appname' # adjust if you are using RVM, remove if you are not $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require "rvm/capistrano" set :rvm_ruby_string, '1.9.2' # file paths set :repository, "ssh://[email protected]/~/git/appname.git" set :deploy_to, "/var/rails/appname" # distribute your applications across servers (the instructions below put them # all on the same server, defined above as 'domain', adjust as necessary) role :app, domain role :web, domain role :db, domain, :primary => true set :deploy_via, :remote_cache set :scm, 'git' set :branch, 'master' set :scm_verbose, true set :use_sudo, false set :rails_env, :production namespace :deploy do desc "cause Passenger to initiate a restart" task :restart do run "touch #{current_path}/tmp/restart.txt" end desc "reload the database with seed data" task :seed do run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}" end end after "deploy:update_code", :bundle_install desc "install the necessary prerequisites" task :bundle_install, :roles => :app do run "cd #{release_path} && bundle install" end Here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly command finished Thanks a lot for your help with this.
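
    The "Permission denied (publickey,gssapi-with-mic,password)" lines come from the deploy server itself trying to clone the git repository over SSH with its own credentials rather than yours, which is why your passwordless SSH from the workstation does not help. A common fix, shown here as a sketch that may not match your exact setup, is to enable SSH agent forwarding in the Capistrano config so the server reuses the key from your workstation:
        # config/deploy.rb
        set :deploy_via, :remote_cache
        ssh_options[:forward_agent] = true   # let the deploy host reuse your local SSH key

        # Alternative: generate a key pair on the server and add its public key
        # to the account that owns the git repository.
    Whether agent forwarding or a server-side deploy key is the right choice depends on how the repository on domainname.com is hosted.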

    Read the article

  • "iostat" command different in two equal machines

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform, I/O Performance: High, API name: c1.xlarge. One of the several machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command is not showing any hint of the cause of the load. CPUs are - 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st. Mem is about 1.5GB free. Any idea what it could be, or where else we can check? iostat on the proper machine:
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   8.97   0.03     4.46     0.19    0.14  86.23
        Device:   tps   Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1   1.60         0.69       55.38     587620   47254184
        xvdfp2   2.64         1.10       61.04     934786   52091056
        xvdfp4   0.86         0.19       41.72     163866   35601920
        xvdfp1   4.37        36.59       73.89   31220810   63051504
        xvdfp3   8.03         7.08       94.63    6045402   80749184
    iostat on the problematic machine:
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.29   0.04     5.55     0.26    0.11  84.74
        Device:   tps   Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1   2.13         3.34       68.85     246244    5077888
        xvdfp1   7.60        74.31      104.88    5480362    7734840
        xvdfp3  13.22        73.67      125.00    5433386    9218600
        xvdfp4   1.11         0.76       65.08      55762    4799248
        xvdfp2   4.16         3.31       99.17     243818    7313264
    Many thanks for the help.
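
    When top does not explain a load average, per-device and per-process statistics usually narrow it down faster than the summary numbers above; this is a hedged sketch assuming the sysstat tools are installed on the Amazon AMI.
        # Extended device statistics (await, svctm, %util), three samples five seconds apart
        iostat -x 5 3

        # Per-process disk and CPU activity, to see which process is driving the load
        pidstat -d 5
        pidstat -u 5

        # Processes stuck in uninterruptible sleep (state D) also raise the load average
        ps -eo state,pid,comm | awk '$1 == "D"'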

    Read the article

  • Chrome pop-up blocker fail?

    - by Count Zero
    I have had this problem for just a few days. I've been a heavy Chrome user for several years, but it never happened before: sometimes absolutely uncalled-for pop-ups appear when I click something absolutely legit. It seems to happen at a rate of about two pop-ups every hour. The pop-ups are not very varied; it seems to be a fixed set of some 4-5 ads. At first I thought I had caught some malware, but after a full scan with Malwarebytes Anti-Malware, I found absolutely nothing. In addition, the problem exists only with Chrome. When I use Firefox, this never ever happens. What is even more puzzling is that it happens on both of my rigs and started on the same day. I have a removable storage drive that I dock to both of them and access one machine from the other via RDP, but still... It seems this is a problem that others face too. See this Google forum and here. Could it be that the latest Chrome build is just acting up? (Just in case it matters: I run Chrome version 24.0.1312.57 m on Win 7.)

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of mssql 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 sp2 x64. Same patch levels on all servers. This setup has worked great for several months until we recently restricted the RPC ports on both nodes of the master(5000 - 6000 using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for sql windows authentication and NETLOGON Event ID: 5719: This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. We also see group policies failing to update and cluster file shares go offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers rebooted, but the problems persist. The domain controllers are not showing any errors. Running dcdiag and netdiag shows everything is fine. We have noticed that the XOSoft service ws_rep.exe is using a lot of handles(8 - 9k), about the same number that sqlserver is using. As soon as xosoft replication is stopped the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually xosoft, but that it is the one bringing the problem to light. I'm looking for tips on debugging RPC problems. Specifically on limiting the ports and then reverting the changes.

    Read the article

  • How to Set up MySQL Server to utilize more memory

    - by Cyril Gupta
    Hi there, I have MySQL set up on Windows along with Plesk. The version is 5.0.45 Community. The databases I have on the server are MyISAM as well as InnoDB, but predominantly InnoDB. I have 8G of memory on my server, but MySQL isn't going above 1.3G and tweaking the settings isn't helping. I tried to increase the memory allocation for innodb_buffer_pool_size; it works if I set it to 1G, but if I set 2G or above, the server doesn't come back online! I want MySQL to use at least 5-6 gigs of the memory I have for performance, but I can't get this to work. Can anyone please help? My MySQL config file is below (there are 2 mysqld sections... when I used MySQL Workbench it created another one!) [MySQLD] port=3306 basedir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL datadir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL\\Data default-character-set=latin1 default-storage-engine=INNODB query_cache_size=128M table_cache=1024 tmp_table_size=32M thread_cache=32 myisam_max_sort_file_size=100G myisam_max_extra_sort_file_size=100G myisam_sort_buffer_size=2M key_buffer_size=32M read_buffer_size=16M read_rnd_buffer_size=2M sort_buffer_size=8M innodb_additional_mem_pool_size=24M innodb_flush_log_at_trx_commit=1 innodb_log_buffer_size=10M innodb_buffer_pool_size=1G innodb_log_file_size=10M innodb_thread_concurrency=8 max_connections=700 key_buffer=48M max_allowed_packet=5M sort_buffer=2M net_buffer_length=4K old_passwords=1 wait_timeout=20 connect_timeout=60 [client] port=3306 [mysqld] query_cache_min_res_unit = 4096 innodb_additional_mem_pool_size = 1048576 innodb_buffer_pool_size = 1G query_cache_limit = 1048576 key_buffer_size = 8388608 sort_buffer_size = 2097144 query_cache_type = 1 query_cache_size = 312M log-slow-queries connect_timeout = 5 wait_timeout = 20 thread_cache_size = 15 read_buffer_size = 131072 table_cache = 64
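
    Two things stand out from the file pasted above: the paths under Program Files (x86) suggest the Plesk-bundled mysqld is a 32-bit build, which cannot address much more than about 2GB on Windows regardless of the buffer pool setting, and there are two server sections ([MySQLD] and [mysqld]) whose duplicate settings can override each other. A hedged sketch of a single consolidated section follows; the values are illustrative and assume a 64-bit mysqld is installed first.
        [mysqld]
        # one consolidated server section -- remove the duplicate block
        innodb_buffer_pool_size        = 5G     # only reachable with a 64-bit mysqld
        innodb_log_file_size           = 256M   # move the old ib_logfile* aside before restarting
        innodb_flush_log_at_trx_commit = 1
        key_buffer_size                = 64M    # MyISAM indexes only
        query_cache_size               = 128M
        table_cache                    = 1024
        max_connections                = 700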

    Read the article

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am using Xubuntu as a virtual OS. The reason I chose this approach is so that I don't have to keep turning off Windows and rebooting into Xubuntu every time I need to switch OSes. And VirtualBox's seamless mode is pretty amazing, allowing me to see Xubuntu and Windows 7 all on one screen. Issue: Now I am thinking of a way to have Xubuntu more integrated into my system. By this I mean I want to have a physical partition for Xubuntu. But I still want to have the feeling of the seamless mode. Question: So finally, my question is: is it possible to load a partition in VirtualBox as a virtual OS? Case examples: The ideal scenario would be: I physically boot up and log in to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, log in to Xubuntu, and can access the actual Windows 7 partition through VirtualBox. Other info: Please note that I am not talking about getting access to files, as I have a completely separate partition for my files, and am very familiar with VirtualBox's Shared Folders option.
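
    VirtualBox can attach a physical disk, or selected partitions of it, through a "raw" VMDK descriptor, which is the closest match to what is described above. The sketch below assumes VirtualBox on the Windows 7 host, an administrator command prompt, and that Xubuntu lives on partition 3 of physical disk 0; adjust the numbers for your layout. Booting a natively installed OS inside a VM (and the reverse) can still confuse bootloaders and drivers, so treat it as an experiment rather than a supported configuration.
        rem Create a VMDK that maps only partition 3 of physical disk 0
        cd "C:\Program Files\Oracle\VirtualBox"
        VBoxManage internalcommands createrawvmdk ^
            -filename "C:\VMs\xubuntu-raw.vmdk" ^
            -rawdisk \\.\PhysicalDrive0 ^
            -partitions 3
        rem Attach xubuntu-raw.vmdk to a new VM as its hard disk, then boot it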

    Read the article

  • Should I, and how do I, back up my database for a web app hosted on an Amazon EC2 server?

    - by user8184
    I set up an Amazon EC2 instance using Ubuntu Server edition. I installed the LAMP stack on it and built a PHP web app running on MySQL. I have not officially launched, but I need to know this before launching: should I back up my database data? If so, how should I do it as cost-effectively as possible? Previously, for another web app, I wrote a Perl or Bash script (cannot remember which) that was executed by cron on a daily basis. The script would back up the database into a single .sql file and send it as an email attachment to my Gmail account. That web app was on shared hosting, hence I was quite sure I needed to back up my database. My files are in a git repo so I am not worried about those. Please advise. I am totally unfamiliar with AWS; I only know as much as setting up an account. Thank you.
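
    Yes, back the database up before launching. A common low-cost pattern on EC2 is a nightly mysqldump pushed to S3 from cron; the sketch below assumes s3cmd is installed and configured, and the database, credentials and bucket names are illustrative.
        #!/bin/bash
        # /etc/cron.daily/db-backup  (hypothetical path and names)
        set -e
        STAMP=$(date +%F)
        DUMP=/tmp/myapp-$STAMP.sql.gz

        # --single-transaction gives a consistent dump of InnoDB tables without locking
        mysqldump --single-transaction -u backup -p'secret' myapp_db | gzip > "$DUMP"
        s3cmd put "$DUMP" "s3://my-backup-bucket/mysql/myapp-$STAMP.sql.gz"
        rm -f "$DUMP"
    Storage on S3 costs far less than an outage, and the dump doubles as a way to rebuild the instance if it is ever lost.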

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much across the whole dataset, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems: Using a filesystem is very inefficient since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive. Updating the dataset is very time consuming and requires remounting between set A and B. The only reasonable handling is operating on the block device for backup, copying, etc. What I would like is a daemon that: speaks HTTP (get, put, delete and perhaps update); stores data in an efficient structure. The index should remain in memory, and considering the number of objects, the overhead must be small. The software should be able to handle massive numbers of connections with little (if any) time needed to ramp up. The index should be read into memory at startup. Statistics would be nice, but not mandatory. I have experimented a bit with riak, redis, mongodb, kyoto and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet etc., so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and the new Windows 7 (64 bit) machine. So I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have: put both computers into the same workgroup; put the Windows 7 machine into work network mode so it can see the XP machine in the workgroup; shared the XP disk as read only. But when I try to access a lot of the folders on the XP disks, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine. The XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on. So it seems that even if I create an admin account (with password) and connect with that account, it still comes in as "guest" on the XP machine. Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but is a pain as I need to share each user's folder with a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.

    Read the article

  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. We have about 75 users using the server, and one particular user, who is about in the middle as far as used storage goes, keeps having issues with slow speed. Most of our users use Thunderbird 2 or Thunderbird 3. Mostly 2, because of the performance issues we have had with 3. This user was on 3, and I downgraded him to 2. The performance has gotten better, but according to the imapd processes on the server, his username is using the most CPU % and CPU time. I've already done all the usual troubleshooting: started the profile from scratch, compacted folders, re-indexed, newer faster computer, etc. Still, this user's imapd process is always using the most CPU on the server. For troubleshooting, we set up another user which has more usage, folders, etc. than he does, but we don't see that user's imapd process taking up most of the CPU. So it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?

    Read the article

  • Issue enabling the InnoDB engine after removing [skip-innodb]

    - by Ahn
    How do I enable InnoDB, which was previously disabled with the skip-innodb option? Case 1: Disabled InnoDB with the skip-innodb option; show engines gives the following: Engine | Support ... | InnoDB | NO ...... Case 2: As I want to enable InnoDB, I commented out the skip-innodb option and restarted. But now show engines does not even show the InnoDB engine in the list. Why? Mysql Version : 5.1.57-community-log OS : CentOS release 5.7 (Final) Log: 120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M 120622 13:06:36 InnoDB: Completed initialization of buffer pool InnoDB: No valid checkpoint found. InnoDB: If this error appears when you are creating an InnoDB database, InnoDB: the problem may be that during an earlier attempt you managed InnoDB: to create the InnoDB data files, but log file creation failed. InnoDB: If that is the case, please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html 120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error. 120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 120622 13:06:36 [Note] Event Scheduler: Loaded 0 events 120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
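
    The "No valid checkpoint found" lines mean InnoDB is seeing log files that do not match its data files, usually leftovers from the earlier failed initialisation, so the plugin refuses to register. If there is no InnoDB data that needs to be preserved yet, a hedged fix is to move the stale files aside and let InnoDB recreate them; the path below is illustrative, so confirm the datadir first and keep the moved files as a backup.
        service mysqld stop
        DATADIR=/var/lib/mysql          # confirm beforehand: mysql -e "SHOW VARIABLES LIKE 'datadir'"
        mkdir "$DATADIR/innodb-old"
        mv "$DATADIR"/ib_logfile0 "$DATADIR"/ib_logfile1 "$DATADIR"/ibdata1 "$DATADIR"/innodb-old/
        service mysqld start
        mysql -e "SHOW ENGINES;"        # InnoDB should now report YES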

    Read the article

  • HP/Lenovo alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reluctant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys? HP/Lenovo? I have searched high and low on the HP site and seen many options, but am struggling to work out whether it is all the hardware I will need. Some options appear to need separate controllers in addition to disk enclosures, etc.

    Read the article

  • Retrieve a user's Exchange database in powershell

    - by Paul
    Hey everyone, I've scoured the interwebs for a few days now, off and on, to find this. I am creating a PowerShell script for mail-enabling new users (Exchange 2007). To give you a little background: when we have a new hire, their AD account is created at our off-site helpdesk, but they don't create their email account. I'm trying to automate the process of mail-enabling the user, which involves putting them in the same database as an existing user, disabling IMAP, POP and ActiveSync, and lastly emailing the requester of the ticket. I would like to just get prompted for the new user's name, the user to replicate (mailbox, storage group, database), and the person to email after it's been created. So if someone could just help with a command to retrieve a user's Exchange database in PowerShell that would be great, but if people also want to help with my hacked-up script please do so as well!!! Here is what I have so far: Write-output “ENTER THE FOLLOWING DETAILS” $DName = Read-Host “User Display Name" $RUser = Read-Host "Replicate User(Database Grab)" ***$RData = #get the Replicate user's mailbox database here*** $REmail = #either just use a Read-Host “Requester's Email address" or ask for Requester's name and pipe through their email address by digging for it w/ powershell Enable-Mailbox -Identity "$DName" -Database "$RData" Send-MailMessage -From "John Doe <[email protected]>" -To (put $REmail here which is the Requester's email) -Subject "Test Person's email account" -Body "Test Person's email account has been setup.`n`n`nJohn Doe`nGeneric Company`nSystems Administrator`nOffice: 123.456.7890`[email protected]" -SmtpServer genericexchange.exchange.com
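
    For the specific missing piece, the Exchange 2007 cmdlets expose the source mailbox's database directly on the object returned by Get-Mailbox, so it can be captured in one line; the snippet below is a sketch that reuses the variable names from the script above and adds Set-CASMailbox for the protocol-disabling step.
        # Grab the database (server\storage group\database) of the user being replicated
        $RData = (Get-Mailbox -Identity $RUser).Database

        # Mail-enable the new user in the same database, then turn off the client protocols
        Enable-Mailbox -Identity $DName -Database $RData
        Set-CASMailbox -Identity $DName -ImapEnabled $false -PopEnabled $false -ActiveSyncEnabled $false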

    Read the article

  • Replace Linux Boot-Drive | ext3 to btrfs

    - by bardiir
    I've got a headless server currently running Debian Linux. Linux vault 3.2.0-3-686-pae #1 SMP Mon Jul 23 03:50:34 UTC 2012 i686 GNU/Linux The root filesystem is located on an ext3 partition on the main hard drive. My data is located on multiple hard drives that are bundled into a storage pool running btrfs. UUID=072a7fce-bfea-46fa-923f-4fb0827ae428 / ext3 errors=remount-ro 0 1 UUID=b50965f1-a2e1-443f-876f-578b5f93cbf1 none swap sw 0 0 UUID=881e3ad9-31c4-4296-ae60-eae6c98ea45f none swap sw 0 0 UUID=30d8ae34-e2f0-44b4-bbcc-22d761a128f6 /data btrfs defaults,compress,autodefrag 0 0 What I'd like to do is to place / into the btrfs pool too. The ideal solution would provide the flexibility to boot from any disk in the system, so if the main drive fails I'd just need to swap another one into the main slot and it would be bootable like the main one. My main problem is that everything I do needs to result in a bootable system that is open to ssh logins via the network, as this server is 100% headless, so there is no possibility of booting it from a live CD or anything like that. So I'd like to be extra sure everything works out fine :) How would I best go about this? Can anybody point me to guides or whip something up for these tasks? Anything I forgot to think about? Copy root data into the btrfs pool, adjust mountpoints,... Adjust GRUB to boot from the btrfs pool UUID or the local device where GRUB is installed Sync GRUB to all hard drives so every drive is equally bootable (is this even possible without destroying the btrfs partitions on the drives, or would I need to disconnect the drives, install GRUB on them and then connect them back with a slightly smaller partition?)
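
    A hedged outline of the copy-and-repoint part follows, assuming the btrfs pool is mounted at /data as in the fstab above; it is a sketch of the sequence rather than a tested recipe for this exact machine, so keep the current ext3 root intact until the new root boots.
        # 1. Create a subvolume for the root filesystem inside the existing pool
        btrfs subvolume create /data/rootfs

        # 2. Copy the running system into it (stays on the root filesystem only)
        rsync -aAXH --one-file-system / /data/rootfs/

        # 3. Point fstab at the pool, mounting the subvolume as /
        #    UUID=30d8ae34-e2f0-44b4-bbcc-22d761a128f6  /  btrfs  defaults,subvol=rootfs,compress  0  0

        # 4. Rebuild the initramfs and GRUB configuration, then install GRUB on every drive
        update-initramfs -u
        update-grub
        for d in /dev/sda /dev/sdb /dev/sdc; do grub-install "$d"; done   # adjust device names
    Installing GRUB on each drive writes only to the boot sector and embedding area, so it does not destroy the btrfs partitions, but the device names above are assumptions that must match your hardware.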

    Read the article

  • Access NFS share from cygwin?

    - by Jason Voegele
    We have a Windows 2003 Server on which we have installed Microsoft's Services for UNIX, and we have mounted a few NFS shares that contain shared resources that we need to access from this box. When I log in to this server with remote desktop, I am able to browse the contents of the NFS shares and everything works fine. However, one use case that we have is that we need to access this server using SSH, and still be able to access the NFS shares. We are running the Cygwin SSH daemon to provide SSH access to the server, but for some reason when we log in to the Windows 2003 server using SSH we can no longer access the NFS shares. To demonstrate, here is the output of the 'mount' command, first from a Cygwin shell when logged in with remote desktop: $ mount C:/cygwin/bin on /usr/bin type ntfs (binary,auto) C:/cygwin/lib on /usr/lib type ntfs (binary,auto) C:/cygwin on / type ntfs (binary,auto) C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto) O: on /cygdrive/o type nfs (binary,posix=0,user,noumount,auto) P: on /cygdrive/p type nfs (binary,posix=0,user,noumount,auto) Z: on /cygdrive/z type nfs (binary,posix=0,user,noumount,auto) And now, the same 'mount' command when logged in with SSH: $ mount C:/cygwin/bin on /usr/bin type ntfs (binary,auto) C:/cygwin/lib on /usr/lib type ntfs (binary,auto) C:/cygwin on / type ntfs (binary,auto) C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto) Notice the missing O: P: and Z: NFS shares in the latter. Can anyone tell me why I am unable to see these NFS shares when logged in with SSH? Thanks!
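
    One likely explanation is that mapped drive letters, including the NFS mounts made by Services for UNIX, belong to the logon session that created them, and the Cygwin SSH daemon creates a separate session (usually under a service account), so the O:, P: and Z: mappings simply do not exist there. A hedged workaround is to map the exports again from inside the SSH session; the server and export names below are placeholders.
        # Inside the SSH session, recreate the mapping for this logon session
        net use O: '\\nfsserver\export'

        # The new drive should then appear to Cygwin as well
        mount | grep -i /cygdrive/o
        ls /cygdrive/o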

    Read the article

  • Upgraded users to Win7. Now getting "path not found" when saving files or opening attachments

    - by Matt Penner
    We have a Server 2008 AD environment with about 5k users. We just rolled out Windows 7 SP1 (they were on XP) with great success. However, about once a day we get a few calls that a user opens a file from their Documents (the folder is on the server and redirected), edits it and attempts to save, but Win7 reports that the path is not found, either because it doesn't exist or because of missing permissions. The only way to fix it is to delete the profile. In addition, we get about the same number of calls from different users saying that they cannot open attachments from Outlook 2010 due to missing permissions. We have to edit the temporary Outlook storage path in the registry to fix it (or delete the profile). I think the two issues may be related. What scares us is that we rolled out 1 month ago and had no calls of this nature until about 2 weeks ago. It started off as one or two but seems to be growing. Any ideas? We're going to open a Microsoft ticket, but I wanted to see if anyone else has run into this. Thanks!
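
    For the attachment half of the problem, Outlook 2010 extracts attachments into a per-profile secure temp folder whose location is stored in the registry; if that path points somewhere that no longer exists or is not writable, users get exactly this kind of permission error. A hedged sketch of checking and redirecting it follows: the value name is the standard one for Office 14.0, while the target path is illustrative.
        rem Where Outlook 2010 currently extracts temporary attachment copies
        reg query "HKCU\Software\Microsoft\Office\14.0\Outlook\Security" /v OutlookSecureTempFolder

        rem Point it at a location that is guaranteed to exist and be writable
        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook\Security" /v OutlookSecureTempFolder ^
            /t REG_SZ /d "%TEMP%\OutlookTemp" /f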

    Read the article

  • VMWare ESXi 5 - Expanded RAID 5 array - cannot access datastore

    - by Dayton Brown
    I'm using VMware ESXi 5 and had a 2 TB RAID 5 setup on an HP DL360 with a P400i RAID card. I added two more 1 TB drives and, using the SmartStart ACU, added the drives and expanded the logical disk. Now, after booting back into ESXi, the server boots but lists no available persistent storage. I've rescanned multiple times to no avail: the datastore doesn't show up. I booted to GParted and the 1.8TB partition shows up, but it shows as unknown. Anyone have any good ideas? EDIT: Final solution. After much gnashing of teeth, it was fairly simple to solve. I purchased an eSATA 2 TB external drive and a PCI eSATA card for my server. I then used Clonezilla to image the current partitions to my new external drive. You have to check "don't check drive sizes" in advanced mode, otherwise it will yell at you for having a smaller drive. For some reason my PCI card wouldn't boot on my HP server, so I hooked the drive up to another desktop I had, booted to VMware, and copied the vmdk's to another drive. I'm going to blow out the RAID config and then create 1.5TB logical drives.

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories: # all customers have group 'customer' Match group customer ChrootDirectory /home/%u # jail in home directories AllowTcpForwarding no X11Forwarding no ForceCommand internal-sftp # force SFTP PasswordAuthentication yes # for non-customer accounts we use keys instead Our servers are running Ubuntu 12.04 LTS.
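
    A minimal sketch of the plain-UNIX route follows, assuming each extra login is a real system account that shares the customer's group and home directory; with the existing Match block, ChrootDirectory would need to use %h (the account's home directory) instead of /home/%u so the extra logins land in /home/customer. Setgid directories and group-write permissions keep files manageable by the main account, although the file owner will still show the sub-account rather than customer, so this only approximates single ownership.
        # Hypothetical extra SFTP-only login tied to the 'customer' account
        useradd --no-create-home --home-dir /home/customer \
                --gid customer --shell /usr/sbin/nologin customer_developer1
        passwd customer_developer1

        # Keep new files group-owned and group-writable under the shared home
        chmod -R g+rwX /home/customer
        find /home/customer -type d -exec chmod g+s {} \;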

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v. 9.7.1 64 bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450MB/s with 25000 IO/s (yes, there is probably some storage system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?
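
    A few hedged starting points for seeing where the two busy agents spend their time; the tools named are standard DB2 9.7 utilities, but the database name is illustrative and the exact options may need adjusting for your instance.
        # Per-EDU and per-agent view of what is running and what it is waiting on
        db2pd -edus
        db2pd -agents -db MYDB

        # Snapshots of the running statement and bufferpool activity
        db2 "get snapshot for application agentid <agent-id>"
        db2 "get snapshot for bufferpools on MYDB"

        # Confirm at the OS level whether the wait state is really I/O wait
        iostat -x 5 3
        vmstat 5 3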

    Read the article

  • Known USB 2.0 devices don't install driver, but must be manually forced

    - by Darragh
    When a known USB 2.0 device is plugged in and detected, it doesn't install the driver correctly but shows a Code 28 error and lists the device under "Other Devices" in Device Manager. When viewing the properties of this device, it shows the following status: The drivers for this device are not installed. (Code 28) There is no driver selected for the device information set or element. To find a driver for this device, click Update Driver. When updating the driver manually and selecting the appropriate driver, Windows doesn't believe it's the correct driver, but you can force the installation and it works! The other condition under which the driver will auto-install is when the same USB device is plugged into a USB 3.0 port. Power-related issues are not causing this either, as I have tried via a docking station, USB hub, etc. Devices tried: Jabra headset; USB mass storage device (flash disk and external HD); MS wireless keyboard & mouse; USB Ethernet controller (USB-MAC controller). This is on a laptop that is part of a domain, running Windows 7 Ent 7601, and I am logged in as a local administrator. There aren't any Group Policies on the domain blocking unsigned drivers or restricting devices to a whitelist. Any suggestions are welcome.

    Read the article

  • Move files from ftp server to s3

    - by lev
    I would like to set up an FTP server where users will upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs on EC2 Ubuntu.) Here is what I have already tried, with no success: Mount the S3 bucket using s3fs. I followed those instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production. Use vsftpd and s3cmd sync via cron to sync the files periodically. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing if I can delete the files after the sync command finishes running. Use pure-ftpd's upload script feature (which allows running a script after a file has finished uploading), but I noticed that if the file upload failed in the middle, the script would run anyway, and I have no way of knowing if the upload was successful or not. I've been at it for a few days now, and I'm at a loss here. Any suggestions will be welcomed.
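
    On the pure-ftpd route, the upload handler receives the path of the uploaded file as its first argument, so the script itself can verify the copy to S3 before deleting anything; it cannot by itself tell a truncated upload from a complete one, but it will never delete a file that did not reach S3. The bucket name and paths below are illustrative, and the sketch assumes s3cmd is configured on the instance.
        #!/bin/bash
        # /usr/local/bin/push-to-s3.sh -- run by pure-uploadscript with the file path in $1
        FILE="$1"
        BUCKET="s3://my-upload-bucket"

        # Copy to S3; only remove the local copy if the transfer reported success
        if s3cmd put "$FILE" "$BUCKET/$(basename "$FILE")"; then
            rm -f -- "$FILE"
        else
            logger -t push-to-s3 "upload failed, keeping $FILE"
        fi
        # Started alongside pure-ftpd (which needs its upload-script option enabled):
        #   pure-uploadscript -B -r /usr/local/bin/push-to-s3.sh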

    Read the article

  • Encrypted folders and files stay encrypted when copied

    - by user66126
    Hi all, I just tried out the Windows 7 feature to encrypt a folder. I found that when I access the encrypted folder from another computer (the parent folder of the encrypted folder is shared) I can see the files there but I cannot open them (which is good). But when I copy a file to another folder outside the encrypted folder (regardless of whether it is on the same remote computer or on the computer from which I am accessing the files), I can open the file without any problem. This might be how it works... but that's not what I need. My question is: can I encrypt a folder (and all files inside), access those files (create, edit) seamlessly while I am logged in normally to the computer, but make the files stay encrypted when they are copied to another directory outside the encrypted folder? Regardless of whether they are copied to the same computer, another computer, or uploaded to a remote server. If this is not a feature that Windows natively supports, is there third-party software that does? Thank you.
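
    This is expected EFS behaviour: files are decrypted transparently for any account that is allowed to open them, so a copy written outside an encrypted folder, or pulled across the network share, lands in plaintext. You can confirm what happened to a particular copy with the cipher tool (the paths below are illustrative); protection that travels with the file needs something else, such as a password-protected archive or a rights-management/container product.
        rem List encryption state of files in a folder: U = unencrypted, E = encrypted
        cipher "D:\CopiedFolder\*"

        rem Detailed information, including which certificates can decrypt a given file
        cipher /c "D:\CopiedFolder\document.docx"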

    Read the article
