Search Results

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. (GridFS is a MongoDB feature that allows you to store large files in the database.) I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience.

    Criteria:

    - I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration, so I would like a fully managed hosting solution. That said, I would like to know about any other option if you think it is worth it.
    - It should be able to scale, cloud style: pay as you go.
    - The lower the price, the better.

    So far I know of these services:

    - https://mongohq.com/pricing
    - https://mongomachine.com/pricing
    - https://mongolab.com/about/pricing/
    - http://cloudcontrol.com/add-ons/mongodb/

    They seem OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters, and in price these services seem to scale quite poorly:

    - MongoHQ: the largest plan's max storage is 20 GB. That seems like very little storage for GridFS.
    - MongoMachine: flat price, $2.50 per GB. I didn't find a limit. Seems like a good price compared to the others.
    - MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    - CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtimes? Other possibilities?

    Edit: added the meaning of GridFS.
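    For readers who haven't used it, here is a quick sketch of the GridFS storage pattern using the mongofiles tool that ships with MongoDB (the host and database names are placeholders, not from the question):

        # store a file in GridFS (it is chunked into fs.files / fs.chunks)
        mongofiles --host db.example.com --db myapp put report.pdf

        # list and retrieve stored files
        mongofiles --host db.example.com --db myapp list
        mongofiles --host db.example.com --db myapp get report.pdf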

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here: I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD, and happens to be larger than FAT32 will allow for in a single file.

    On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:

    - CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB)
    - CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB)

    I have a USB stick that I want to somehow get these two ISOs onto. Since the first one is 4.4 GB, I can't use ISO2USB, because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image of the same distro (I'm not trying for some fancy multi-boot thing) to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the md5 checker. There's no IMG file of the whole thing (only a net-install version, which I don't want; I want to pre-download everything), otherwise I would have gone for that.

    So, given that I have these two DVD ISOs, how can I get them onto a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found a tool that does this, but it doesn't run on Windows, and I am using Windows to prepare the stick.

  • How to repair damaged repository (which has a centralized .svn directory)?

    - by Heinrich Ulbricht
    I recently upgraded my TortoiseSVN installation to version 1.7.1, which forced me to upgrade my working copy as well. The upgrade removed all (but one) of the .svn directories from all subdirectories, leaving only one in the root.

    Now, out of the blue (of course; I suspect my antivirus software), there is an error when I, for example, try to clean up the working copy. I am also not able to commit anything. The error message when cleaning up is:

        Cleanup failed to process the following paths:
        C:\svn
        Can't open file 'C:\svn\.svn\pristine\73\73bcc5fa7819f84f56b81dfa0236f0aac7b7d404.svn-base':
        The system cannot find the file specified.

    I traced the error to the presence of one directory within the working copy: if I rename it, everything works; when it is present, I get the error. I also deleted it and checked it out again. No change, the error persists.

    With previous versions I could repair damage in the .svn easily: just delete the offending folder and check out again. I cannot do this anymore because the .svn dir is now centralized. What can I do to repair my working copy?
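    One repair sketch, under the assumption that a fresh checkout of the same revision contains the missing pristine file (which should hold, since the pristine store is content-addressed; the repository URL below is a placeholder): check out elsewhere, copy the missing file back, and retry the cleanup.

        svn checkout http://server/repo C:\svn-fresh
        copy C:\svn-fresh\.svn\pristine\73\73bcc5fa7819f84f56b81dfa0236f0aac7b7d404.svn-base C:\svn\.svn\pristine\73\
        svn cleanup C:\svn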

  • Enable SSL with Jetty 8

    - by Jerec TheSith
    I received certificates from GoDaddy and I'm trying to enable SSL with Jetty, but I receive an error 107 (SSL protocol error) when connecting to https://server.com:8443. I generated the keystore using these commands:

        keytool -keystore keystore -import -alias gd_bundle -trustcacerts -file gd_bundle.crt
        keytool -keystore keystore -import -alias server.com -trustcacerts -file server.com.crt

    and placed it in /opt/jetty/etc/. I used the following configuration in jetty.xml:

        <Call name="addConnector">
          <Arg>
            <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
              <Arg>
                <New class="org.eclipse.jetty.http.ssl.SslContextFactory">
                  <Set name="keyStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
                  <Set name="keyStorePassword">**password1**</Set>
                  <Set name="keyManagerPassword">**password1**</Set>
                  <Set name="trustStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
                  <Set name="trustStorePassword">**password1**</Set>
                </New>
              </Arg>
              <Set name="port">8443</Set>
              <Set name="maxIdleTime">30000</Set>
              <Set name="Acceptors">2</Set>
              <Set name="statsOn">false</Set>
              <Set name="lowResourcesConnections">20000</Set>
              <Set name="lowResourcesMaxIdleTime">5000</Set>
            </New>
          </Arg>
        </Call>

    Am I missing something in Jetty's configuration?
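    One thing worth checking: both keytool commands above import certificates only, so unless the keystore already held the private key that produced the CSR, it contains no key for Jetty to present, and the handshake will fail. A quick check:

        keytool -list -keystore keystore

    If every alias is listed as trustedCertEntry and none as PrivateKeyEntry (keyEntry on older JDKs), the keystore is missing the server's private key; the GoDaddy certificate has to be imported into the same keystore that was used to generate the CSR.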

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano, and I'm getting relentless permission-denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission denied" error. (I'm doing setup as root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install

        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so I'm not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
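    Note that the failing clone runs on the deploy server, which then needs its own access to the git host; being able to ssh from your workstation isn't enough. One common fix for exactly this symptom is SSH agent forwarding, so the deploy server reuses your workstation's key (a sketch, assuming your local key is authorized on the repo host):

        # in deploy.rb
        ssh_options[:forward_agent] = true

    You can test the idea from a shell first (repohost is a placeholder for wherever the repository actually lives):

        ssh -A root@domainname.com 'ssh -T git@repohost'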

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use macOS's built-in Apache and PHP modules, but while uninstalling XAMPP I accidentally deleted /usr/bin/php and the other PHP CLI files. I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unzipped it, then ran the following, as mentioned in this guide:

        ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip'
        make
        make install

    When I check phpinfo(), it still reports version 5.4.24. This line from my httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    loads /usr/libexec/apache2/libphp5.so, which comes from the old version, and I couldn't find a libphp5.so for the new version: there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache?

    UPDATE: results of the php -v command:

        PHP 5.5.12 (cli) (built: May 27 2014 05:17:21)
        Copyright (c) 1997-2014 The PHP Group
        Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
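    Two quick checks worth trying (a sketch, not a guaranteed fix). Apache keeps the old module image in memory until it is restarted, so restart it first; if phpinfo() still reports 5.4.24, locate what make install actually produced and point LoadModule at that path:

        # reload the built-in Apache so it picks up a replaced libphp5.so
        sudo apachectl restart

        # find every libphp5.so on disk and compare paths and timestamps
        sudo find / -name libphp5.so -ls 2>/dev/null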

  • csvde doesn't import users

    - by The Eighth Ero
    I have a small problem, as I'm a beginner at server management. I installed a domain controller on my Windows Server 2008 machine and created three OUs. Now I'm trying to add users to each OU via the csvde command, but this is all I get as the result of the operation, with no errors mentioned:

        C:\csvde>csvde -i -f List.csv
        Connecting to "(null)"
        Logging in as current user using SSPI
        Importing directory from file "List.csv"
        Loading entries.
        0 entries modified successfully.

    Below is the CSV file I'm using to add two users to the "Offshoring1" OU; the domain name is "iado.lan":

        DN                                      objectClass  sAMAccountName  sn  givenName  userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan  user         BB              NN  BB         [email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan  user         II              YY  II         [email protected]

    and this is the CSV data as generated by Word 2011 on my Mac:

        DN;objectClass;sAMAccountName;sn;givenName;userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan;user;BB;NN;BB;[email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan;user;II;YY;II;[email protected]

    I do use the -k option to force the import, but still no success.
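    csvde only parses comma-separated files, so the semicolon-delimited file that Word produced is read as one giant column, which would explain the "0 entries modified" result. A corrected sketch (the addresses are placeholders, since the originals are redacted above; note that the DNs themselves contain commas and therefore must be quoted):

        DN,objectClass,sAMAccountName,sn,givenName,userPrincipalName
        "cn=BB NN,ou=Offshoring1,dc=iado,dc=lan",user,BB,NN,BB,bb@iado.lan
        "cn=II YY,ou=Offshoring1,dc=iado,dc=lan",user,II,YY,II,ii@iado.lan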

  • Grub and Renaming... Why does Kubuntu 9.10 make it so hard?

    - by NH
    I'm trying to rearrange the GRUB menu in Kubuntu 9.10 (similar to this post), but unfortunately Kubuntu includes the latest (and NOT greatest) version of GRUB, which no longer uses the elegant menu.lst. Argh!

    So anyway, I'm digging around in /etc/grub.d and I can't figure out how to rename the files in order to get them to boot in another order. (On a side note, I can't get xPUD to show up in the boot list, but that is a little less important.)

    So why doesn't it work to do sudo grub in the terminal? That seems to be the easiest option, but it doesn't work either. Further, why can't I rename the files? Do I need to do it in the terminal? If so, how do I rename a file with the terminal? Can I run Dolphin (or Konqueror, or whatever) as root (or su)? And don't tell me I need to try chmod first; I already tried that, and I still couldn't rename the file.
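    For background: GRUB 2 dropped the interactive shell, which is why sudo grub does nothing, and the files in /etc/grub.d are owned by root, which is why renaming them fails from a file manager. The menu is generated from those scripts in lexical order, so reordering means renaming the numeric prefixes as root and regenerating the menu. A sketch, assuming the stock script names:

        # run the os-prober entries before the stock Linux entries
        sudo mv /etc/grub.d/30_os-prober /etc/grub.d/08_os-prober

        # regenerate /boot/grub/grub.cfg from the renamed scripts
        sudo update-grub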

  • Installing mod_wsgi gives a 403 error

    - by John Smiith
    Installing mod_wsgi is giving a 403 error. In httpd.conf I added the code below:

        WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py"
        <Directory "C:/xampp/www/htdocs/wsgi_app/">
            AllowOverride None
            Options None
            Order deny,allow
            Allow from all
        </Directory>

    wsgi_handler.py:

        def application(environ, start_response):
            status = '200 OK'
            output = 'Hello World!'
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Note: localhost is my virtual host domain and it is working fine, but when I request http://localhost/wsgi/ I get a 403 error.

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/www/htdocs/localhost"
            ServerName localhost
            ServerAlias www.localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" combined
        </VirtualHost>

    Error log:

        [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico
        [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] Options ExecCGI is off in this directory: C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache

    Note: my Apache is not in C:/xampp/bin/apache; it is in C:/xampp/bin/server-apache/.
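    The telling line is "Options ExecCGI is off in this directory": that message comes from Apache treating wsgi_handler.py as a CGI script, not as a WSGI application, so the request is not reaching mod_wsgi at all. Two checks, sketched under the assumption of a standard XAMPP-on-Windows layout:

        rem confirm the wsgi module is actually loaded
        C:\xampp\bin\server-apache\bin\httpd.exe -M | findstr wsgi

        rem look for an AddHandler cgi-script line that claims .py files
        findstr /s /i "AddHandler" C:\xampp\bin\server-apache\conf\*.conf

    If an AddHandler cgi-script mapping lists .py, removing .py from it should let the WSGIScriptAlias take effect.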

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git that is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory, and the original files are symlinked to there.

    I am running out of disk space and need to clear up and see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty, and only count the file it points to as using any space. Which means that when I use git-annex, they show no space used in the main directories and lots of space used in the .git/annex directory. This is not helpful.

    Is there any disk usage programme for Linux (graphical or ncurses-based; apt-get installable would be easiest) that is capable, through options or otherwise, of counting a symlink as using up the space that the original file uses up? Many have options for different behaviour for hard links, so it makes sense that some should handle symlinks too.

    (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
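    It's not the graphical tool asked for, but plain du already has the counting mode wanted here: -L (--dereference) follows symlinks and charges the target's size to the directory holding the link. A sketch, run from the top of the annexed tree:

        # per-directory totals, following symlinks, human-readable and sorted
        du -Lsh ./* | sort -h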

  • What's wrong with my .htaccess? Trying to simplify actual code

    - by AlexV
    This is my actual .htaccess:

        #If the requested URI does not end with an extension
        RewriteCond %{REQUEST_URI} !\.(.*)
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

        #If the requested URI ends with .php*
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Since all the conditions are compatible except the first ones (the no-extension and *.php* matches), all I should have to do is add [OR] to those two lines. But when I add it, it doesn't work: my no-extension rule no longer matches. This is my new (not working) code:

        #If the requested URI does not end with an extension OR the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Hopefully someone will be able to clarify this issue; I guess I don't fully understand the use of [OR]. Thanks!
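    The catch is that [OR] only joins the two conditions it sits between; the lookbehind condition is still ANDed onto that group, and since (?<!seo-urls-mapper)\.php.*$ only ever matches filenames containing .php, every extensionless request now fails the chain. Writing the loop guard as a plain negation lets extensionless URIs pass it too. A sketch of the idea:

        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php [NC]
        #Loop guard that also passes for extensionless URIs
        RewriteCond %{REQUEST_FILENAME} !seo-urls-mapper\.php
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]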

  • Oracle Error ORA-12560 TNS:Protocol Adapter error?

    - by David Basarab
    I am using Oracle Database 10g; both servers are Windows 2003. I have an Oracle database set up on one server. Here is the tnsnames.ora from the server with the database:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL.VIRTUALHOLD.COM =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    The environment variables on the server are:

        ORACLE_HOME = C:\oracle\product\10.2.0\db_1
        ORACLE_SID = orcl

    I am trying to connect to it from another box that has the Oracle client installed. Here is the tnsnames.ora on the client server:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\client_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
            )
          )

    The environment variables on the client are:

        ORACLE_HOME = C:\oracle\product\10.2.0\client_1
        ORACLE_SID = orcl

    Locally on the database server I can connect through sqlplus with no issues. On the client machine I keep getting the error:

        ORA-12560: TNS:protocol adapter error

    What am I missing? Does the client tnsnames.ora need to be different?
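    ORA-12560 on a Windows client is usually a sign that sqlplus never consulted tnsnames.ora at all: with ORACLE_SID set and no @alias on the connect string, the client attempts a local bequeath connection to a database that doesn't exist on that box. A quick check from the client:

        rem resolve the alias through tnsnames.ora
        tnsping ORCL

        rem connect explicitly through the alias instead of relying on ORACLE_SID
        sqlplus system@ORCL

    If that works, removing ORACLE_SID from the client's environment variables should make the behaviour consistent.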

  • Need some help figuring out a ClamAV and Monit monitoring error (unixsocket)

    - by Ronedog
    I need a bit of help figuring something out. First off, I'm not very well versed with FreeBSD servers, but with some direction hopefully I can get this fixed.

    I'm using FreeBSD and installed Monit so I could monitor some of the processes that run: Tomcat, Apache, MySQL, Sendmail, ClamAV. So far I'm only successful in getting Apache and MySQL monitored. For ClamAV I'm getting this error in /var/log/monit.log:

        'clamavd' failed, cannot open a connection to UNIX[/usr/local/etc/rc.d/clamav-clamd]

    My config for ClamAV in /etc/monitrc is:

        ####################################################################
        # CLAMAV Virus Checks
        ####################################################################
        check process clamavd with pidfile /var/run/clamav/clamd.pid
          group virus
          start program = "/usr/local/etc/rc.d/clamav-clamd start"
          stop program = "/usr/local/etc/rc.d/clamav-clamd stop"
          if failed unixsocket /usr/local/etc/rc.d/clamav-clamd then restart
          if 5 restarts within 5 cycles then timeout

    Honestly, I really don't know much of what's going on here. My host basically installed ClamAV when they set up the box, but they don't offer this kind of detailed support, so I'm left to figure this stuff out on my own (I own the box; they provide the ISP service). Is there anyone who can help me troubleshoot this? Thanks for your help in advance.
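    The unixsocket test above points at the rc.d start script, not at a socket, so Monit is trying to talk to the init script as though it were clamd's UNIX socket; that matches the error message exactly. A corrected sketch (the socket path below is a common default; the authoritative value is the LocalSocket setting in your clamd.conf):

        check process clamavd with pidfile /var/run/clamav/clamd.pid
          group virus
          start program = "/usr/local/etc/rc.d/clamav-clamd start"
          stop program = "/usr/local/etc/rc.d/clamav-clamd stop"
          # test clamd's actual socket, per LocalSocket in clamd.conf
          if failed unixsocket /var/run/clamav/clamd.sock then restart
          if 5 restarts within 5 cycles then timeout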

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So I've tried putting the following into named.conf.local:

        include "/home/zones/group1.conf";
        include "/home/zones/group2.conf";
        include "/home/zones/group3.conf";

    However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses AppArmor for bind, so I also added the following in /etc/apparmor.d/usr.sbin.named:

        /home/zones/group1.conf r,
        /home/zones/group2.conf r,
        /home/zones/group3.conf r,

    Now when I restart named, all appears to be well; zones are loaded (I think). However, a day or two later, I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains. I then have to put all the domains back into the named.conf.local file again.

    How can I get bind9 to use include files in this way? I don't know much about AppArmor, so that may or may not be the issue here, but I've used include files this way on Debian OK.
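    Two suggestions, sketched rather than verified on 10.04: a glob rule keeps the AppArmor profile from drifting out of sync with the zone files as groups are added, and the profile must be reloaded (not just named restarted) before changes take effect:

        # in /etc/apparmor.d/usr.sbin.named: allow everything under /home/zones
        /home/zones/ r,
        /home/zones/** r,

    then:

        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        sudo /etc/init.d/bind9 restart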

  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with Puppet.

    Error on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    Error on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
          include ntp
          class { 'nagios::host':   # this is line 6
            nodename => $clientcert,
            appname  => 'test',
          }
        }

    nagios::host in module/nagios/host.pp is here:

        class nagios::host($nodename, $hostgroup) {
          file { '/usr/lib/nagios/plugins':
            mode    => "755",
            require => Package["nagios-plugins"],
          }
          ...
          @@nagios_service { "${nodename}_check_ssh":
            ensure                 => present,
            use                    => 'generic-service',
            host_name              => "${nodename}",
            notification_interval  => 60,
            flap_detection_enabled => 0,
            service_description    => "SSH",
            check_command          => "check_ssh",
            target                 => "/etc/nagios3/services.d/${nodename}.cfg",
          }
        }

    and the file module/nagios/init.pp is blank. How can I fix it?
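    One possibility worth ruling out: the parameterized-class syntax used at line 6, class { 'nagios::host': ... }, only exists from Puppet 2.6 onward, and an older master fails on it with exactly this "Syntax error at '{'" message. Comparing versions is a cheap first step:

        puppet --version        # puppetd --version on very old installs

    Separately, note that nagios::host declares a $hostgroup parameter while the node passes appname; once the parse succeeds, that mismatch would likely be the next error.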

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi. I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for extracting the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file, and proceeded to deploy across the network. The software package is computer-assigned, to be installed prior to logon, to avoid user-permission issues.

    The package deploys correctly to computers and runs perfectly fine from a shortcut, but when trying to view a PDF from within a web browser it fails with the following message:

        The Adobe Acrobat/Reader that is running can not be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again.

    I have found many pages on Google referring to this problem, but none appear to relate to the problems I have found:

        http://kb2.adobe.com/cps/405/kb405461.html

    These fixes recommend correcting a registry entry (which, I should mention, is missing after the deployed installation); however, this does not work. Other suggestions: switching off display in a browser (which seems to defeat the object of fixing the problem), removing old versions (there aren't any), and trying a different user (this affects all users of all privilege levels on all computers).

    On my workstation I uninstalled Acrobat Reader 9.1 and reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed it in the same way, which works fine, but I would like to be using the up-to-date version. Thanks

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:

        Jan 11 07:48:46 blahblahblah...
        Jan 11 07:49:00 blahblahblah...
        Jan 11 07:50:13 blahblahblah...
        Jan 11 07:51:22 blahblahblah...
        Jan 11 07:58:04 blahblahblah...

    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file, and I'd like to write a general-purpose script for this that I can call like:

        $ timegrep 22:30-02:00 /logs/something.log

    ...and have it pull out the lines from 22:30 onward, across the midnight boundary, until 2 am the next day. There are a few caveats:

    - I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out.
    - The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
    - I want it to be fast: it should use the fact that the lines are in order to seek around in the file and use a binary search.

    Before I spend a bunch of time writing this, does it already exist?
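    For comparison, a linear-scan baseline can be written with sed's range addressing; it has none of the binary-search speed, and it assumes the dates have been computed beforehand and that lines starting with exactly those boundary timestamps exist in the file:

        sed -n '/^Jan 11 22:30/,/^Jan 12 02:00/p' /logs/something.log

    The gap between this sketch and the requirements (date inference, year rollover, seeking) is exactly what would make a dedicated tool worth having.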

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search-box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to come from the file:

        C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal

    I believe this is a temporary file which caches database writes. A Resource Monitor screenshot showed very high response times across six different searches, with the queue length on drive C shooting off the scale.

    My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions.

    Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.
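    Since the stalls trace to formhistory.sqlite, one workaround (at the cost of losing form autocomplete entirely) is to stop Firefox writing that database at all. In about:config, set:

        browser.formfill.enable = false

    This disables form and search-box history, so the -journal file should no longer be rewritten on every search or form submission.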

  • Word 2010 does not save as Word 2003 XML

    - by Peter
    I have a document which was created in Word 2010, but for use in a particular application it needs to be saved in Word 2003 XML format. When I try the normal "Save as" via the File menu (choosing Word 2003 XML format), Word 2010 thinks for a while and then presents the "Save as" dialog to me again, suggesting that I save the document as .docx.

    Trying to get around this, I saved the document as .doc (i.e. Word 97-2003 document). This worked fine. But when I try to save this .doc file as Word 2003 XML, again Word 2010 thinks for a while and then presents the "Save as" dialog, this time suggesting that I save the document as .doc.

    I should say that this only happens with one specific document; all others work fine. I know I should try a process of elimination to see what is causing the symptoms, but it would be nice to have an answer in principle. Is there perhaps a setting somewhere that I have to enable? Does anyone know what's going on here?

  • Wildcard redirect gives the error "This webpage has a redirect loop"

    - by david
    On my website I renamed the directory "vehicles-cars" to "vehicles-cars-for-sale". When I tried to redirect the old directory name to the new one using a wildcard redirect in my web hosting cPanel account, every page I open from that directory gives the error:

        This webpage has a redirect loop.

    The website is PHP. The problem is that lots of pages from the old directory are indexed in Google, and they are getting duplicate content. If I redirect a single page it works perfectly, but there are lots of pages, so I need a wildcard redirect for the whole directory. I really need some advice on what to do about this problem. Here is the .htaccess code for the redirect:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^example\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$
        RewriteRule ^vehicles\-cars\/?(.*)$ "http\:\/\/example\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    I have another wildcard redirect of a whole directory with the same code, and it works perfectly. Here is that code from the same .htaccess file:

        RewriteCond %{HTTP_HOST} ^adsbuz\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.adsbuz\.com$
        RewriteRule ^autos\/?(.*)$ "http\:\/\/adsbuz\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    So I don't understand what's wrong with the first block. I really need some expert advice. Thanks again.
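    The difference from the working autos rule is that here the new name begins with the old one: the pattern ^vehicles-cars/?(.*)$ also matches vehicles-cars-for-sale/..., so every redirected URL matches the rule again, endlessly. Excluding the new directory breaks the loop. A sketch of the fix:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
        # skip URLs already under the new directory
        RewriteCond %{REQUEST_URI} !^/vehicles-cars-for-sale
        RewriteRule ^vehicles-cars/?(.*)$ http://example.com/vehicles-cars-for-sale/$1 [R=301,L]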

  • Mount EC2 instance via SSH on Mac OS X

    - by darkporter
    OK, I just can't figure this out. I have an EC2 instance which I'm able to SSH into just fine with:

        ssh -i XXXX.pem [email protected]

    I can even make it slick from the command line by creating a ~/.ssh/config with this in it:

        Host XXXX
            HostName XXXX
            User ubuntu
            IdentityFile ~/.ec2/XXXX.pem

    which allows me to simply do ssh XXXX with no -i option.

    Now I want to mount this via SSH. I've tried MacFUSE/SSHFS, MacFusion and ExpanDrive, but no luck. It's supposed to "just work", but the SSH-related command-line utilities and the Keychain Access program in OS X are confusing and opaque to me. From what I've read, these GUI programs don't care about .ssh/config; they care about the Keychain. Somehow I can associate the domain name I'm connecting to with a particular "identity" private key file (.pem file), but I have no idea how. I tried this:

        ssh-add -K XXXX.pem

    which does add the key to the Keychain, but it's not associated with a particular domain. The GUI mounting programs I mentioned all just spin and do nothing when I try to connect passwordless: no Keychain prompt, no nothing.

    I've pretty much given up and I'm thinking about just setting up an SMB server, but I'd rather go over SSH since I believe it's possible.
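    The command-line sshfs (which MacFUSE can drive directly) does honor ~/.ssh/config, and it also accepts the key explicitly, which sidesteps the Keychain question altogether. A sketch (the mount point is arbitrary, and your-ec2-host is a placeholder for the instance's address):

        mkdir -p ~/mnt/ec2

        # either rely on the Host alias from ~/.ssh/config ...
        sshfs XXXX:/home/ubuntu ~/mnt/ec2

        # ... or name the key explicitly; unrecognized -o options pass through to ssh
        sshfs ubuntu@your-ec2-host:/home/ubuntu ~/mnt/ec2 -o IdentityFile=~/.ec2/XXXX.pem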

  • How to prevent Gnome-shell's Alt+Tab from grouping windows from similar apps?

    - by wleoncio
    I love pretty much everything about how GNOME Shell handles app switching through Alt+Tab. My one gripe is how it forces the user to use Alt+` to switch between windows of the same app. This is very annoying for me, because now I have to keep in mind whether the last window I was using belonged to the same app as the current window or not. Definitely a nuisance for power users who think in terms of "windows I'm working with" instead of "applications I'm working on".

    I've tried the AlternateTab extension (https://extensions.gnome.org/extension/15/alternatetab/), but it looks way too ugly for me. Not to mention that in the end all I want is to remap Alt+(key above Tab) to Alt+Tab in this application.

    I guess one option would be to just tweak GNOME Shell. My guess is that I should tinker with the altTab.js file at /usr/share/gnome-shell/js/ui/, but the file is too long and overwhelming for someone like me who doesn't know JavaScript. Does anyone know how I can make GNOME Shell stop grouping windows by application?
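    Depending on the GNOME Shell version, the window manager exposes an ungrouped switcher as its own keybinding, so this may be solvable with gsettings rather than by editing altTab.js. A sketch (these keys exist on newer releases; check with gsettings list-keys org.gnome.desktop.wm.keybindings first):

        # unbind the grouped application switcher
        gsettings set org.gnome.desktop.wm.keybindings switch-applications "[]"

        # bind plain window-by-window switching to Alt+Tab
        gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"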

  • Completely unintuitive Apache/PHP memory-freeing behavior

    - by David
    Okay, this one's weird. I have a Turnkey Linux server with a gig of dedicated RAM. It's running WordPress 3.2 with a boatload of plugins. It's a new site, so it has very limited traffic (other than search engines, maybe 20 hits a week).

    For a few weeks, every few days it would max out main RAM, start eating up virtual RAM, and then crash. It had shown this behavior for a while, and I'd been trying to figure out which element was causing the crash.

    Nine days ago, I pointed my external server monitor at this server. I wrote a 5-line HTML file (not PHP and not WordPress) that the monitor fetches every minute to see if the server is up. Nine days later, the server has been rock solid: up all the time, no memory leak at all. I changed NOTHING else on the server.

    Have you EVER seen anything like this? All the server monitor does is retrieve a single, super-simple HTML file, and all the memory-leak problems have gone away. Weird, eh?

  • Converting Audio To Video Output and Attaching Text?

    - by ZeeMan
    I am currently working on a project, and before I get started I thought it'd be nice to check with the Stack Overflow community and see if maybe they can help me with this.

    The idea: I have about a thousand MP3 files that I need to convert into video files to be uploaded to YouTube for my work. Here is where it gets tricky: I also need to attach the text associated with each audio file to the video as an image. I was thinking .ppt.

    The problem: I can do this one audio file at a time, but it would take me a zillion years!

    The question: Can I create some kind of program, using, let's say, XML or JavaScript or XHTML or some other language, to do mass content creation where all I have to do is feed it the information? Possibly a script? Or is it possible to create an example .ppt file and then hack it so that it can reproduce itself with different information?

    The note: Thanks in advance for helping out! Regards, ZeeMan!!!
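    This sort of batch job is usually scripted around ffmpeg rather than PowerPoint. Assuming one slide image has been exported per track with a matching name (track001.png beside track001.mp3; that naming is an assumption, not from the question), a shell loop can render each pair into an uploadable video:

        # each foo.mp3 must have a matching foo.png slide image
        for f in *.mp3; do
          ffmpeg -loop 1 -i "${f%.mp3}.png" -i "$f" \
                 -c:v libx264 -tune stillimage -c:a aac -shortest \
                 "${f%.mp3}.mp4"
        done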

  • DNSSEC - First Signature

    - by Arancha
    I'm testing DNSSEC with BIND 9.7.2-P2, and I have a question regarding the first signature created over a zone that already exists. I'm using dynamic DNS.

    I create the first two keys: one KSK and one ZSK. According to https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-key-timing/, the first ZSK needs to be published for an interval equal to Ipub before it can be active. I create the ZSK with a publication date prior to its activation date. I restart the service and I can see that the key is published at the publication date, but it does not become active later, when the activation date arrives.

    This is the configuration of the zone dnssec.es in named.conf:

        zone "dnssec.es" {
            auto-dnssec maintain;
            update-policy local;
            sig-validity-interval 1;
            key-directory "dnssec/keys_dnssec";
            type master;
            file "dnssec/db.dnssec.es";
        };

    Any clue? Regards
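    For reference, the timing metadata lives in the key files themselves and is set at generation time with -P and -A, or inspected and adjusted afterwards with dnssec-settime. A sketch of the intended schedule; the algorithm, key size, and the key tag in the second command are placeholders, not values from the question:

        # ZSK: publish now, activate after the publication interval
        dnssec-keygen -K dnssec/keys_dnssec -a RSASHA256 -b 1024 -P now -A now+1d dnssec.es

        # print the timing dates of an existing key
        dnssec-settime -p all Kdnssec.es.+008+12345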
