Search Results

Search found 26517 results on 1061 pages for 'large directory'.

Page 340 of 1061

  • Overwrite previous output in Bash instead of appending it

    - by NES
    For a bash timer I use this code:

        #!/bin/bash
        sek=60
        echo "60 Seconds Wait!"
        echo -n "One Moment please "
        while [ $sek -ge 1 ]
        do
            echo -n "$sek "
            sleep 1
            sek=$[$sek-1]
        done
        echo
        echo "ready!"

    That gives me something like: One Moment please: 60 59 58 57 56 55 ... Is there a way to replace the last printed second with the most recent one, so that the output doesn't leave a long trail but the seconds count down in place, like a real clock? (Hope you understand what I mean :))
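
    A minimal sketch of the usual fix: print a carriage return (\r) instead of a newline, so each count overwrites the previous one in place (assumes a normal terminal; -n suppresses the newline and -e enables the \r escape):

        #!/bin/bash
        sek=60
        echo "60 Seconds Wait!"
        while [ $sek -ge 1 ]; do
            # \r moves the cursor back to column 1 without advancing a line
            echo -ne "One Moment please: $sek \r"
            sleep 1
            sek=$((sek - 1))
        done
        echo
        echo "ready!"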

  • How to map DNS with my new IP address? [closed]

    - by Carsen
    I have installed apache2 on my Ubuntu server. In apache2.conf, I have specified this:

        <VirtualHost *:80>
            ServerName something.in
            DocumentRoot somewhere/public
            <Directory somewhere/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    I also have my domain, something.in, registered with Go Daddy. There I have changed the A (host) record to point to XXX.XXX.XX.XXX, which is my public address. But when I type something.in in the browser, I am not getting my app's home page. I was given my public IP address as "XXX.XXX.XX.XXX is NATted to XX.XX.X.XX". Which IP address should I use in my DNS settings? And how do I make apache2 on my Ubuntu server listen for requests to something.in?
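
    The usual answer here: the A record must carry the public (pre-NAT) XXX.XXX.XX.XXX address, and the device doing the NAT has to forward port 80 to the internal XX.XX.X.XX machine; Apache itself needs no change beyond the ServerName already set. A quick diagnostic sketch (hostname kept from the question):

        dig +short something.in    # should print the public XXX.XXX.XX.XXX address
        # run on the server itself: tests the vhost without involving DNS or NAT
        curl -H "Host: something.in" http://localhost/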

  • NTFS disk mounted as fuseblk in Ubuntu 12.10 is very slow, with many rsync errors. Is that common?

    - by Pablo Marin-Garcia
    I am having problems with an NTFS disk mounted as fuseblk on my Ubuntu 12.10 box, attached via external USB 3.0. When I did a 1.1 TB backup with rsync, the speed was 1-2 MB/s (with an ext4 disk, the speed was 70 MB/s before and after trying the NTFS disk). Also, after one hour, errors started to appear:

        rsync: write failed on "xxx": No such file or directory
        recv_files: "yyy" is a directory   # but this file is a FILE, not a dir ??!!
        ...

    As this is the first time I have mounted NTFS on Linux for heavy usage (the data will be used on Windows afterwards), I would like to know whether this kind of thing is common, or whether something simply became unstable on my system and a restart would probably have solved it. This leads me to these questions: Can I trust FUSE to manage NTFS disks? Or is the problem that the NTFS tools on Linux are not yet totally stable for writing? Are people still suffering from low performance with FUSE NTFS vs ext4 (in the past I have read people complaining about this)?
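
    On the performance side, one commonly suggested mitigation is remounting with ntfs-3g's big_writes option, which cuts per-write FUSE overhead; the write failures are also worth checking against the kernel log in case the USB device dropped mid-transfer. A sketch, where /dev/sdb1 and /mnt/backup are assumptions for your disk and mount point:

        # did the kernel drop the USB device during the transfer?
        dmesg | tail -n 50

        # remount with reduced FUSE write overhead
        sudo umount /mnt/backup
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /mnt/backup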

  • Are short identifiers bad?

    - by Daniel C. Sobral
    Are short identifiers bad? How does identifier length correlate with code comprehension? What other factors (besides code comprehension) might be worth considering when naming identifiers? Just to try to keep the quality of the answers up, please note that there is already some research on the subject! Edit: Curious that everyone either doesn't think length is relevant or tends to prefer longer identifiers, when both links I provided indicate that long identifiers are harmful!

  • The Road to Professional Database Development: Database Normalization

    Not only is the process of normalization valuable for increasing data quality and simplifying the process of modifying data, but it actually makes the database perform much faster. To prove the point, Peter Larsson takes a large unnormalised database and subjects it to successive stages of normalisation.

  • Local SEO Tools For SME's

    For many large corporations, the focus of their SEO strategy will be on a national scale. But for small and medium-sized enterprises, a more local view should often be taken to maximise visibility with their target audience and ensure they are reaching their potential client base on a regular basis. Alongside the traditional search engine optimisation tactics, there are a number of tools that can be incorporated within local strategies.

  • Multiple PHP versions running as cgi

    - by Pierre
    I'm trying to install a second version of PHP, to run alongside the current version. I've compiled the latest PHP source from GitHub (5.5-DEV), and I'm trying to run it as CGI. Here is my virtual host config:

        <VirtualHost *:8055>
            DocumentRoot /Library/WebServer/Documents/
            ScriptAlias /cgi-bin/ /usr/local/php55/cgi
            Action php55-cgi /cgi-bin/php-cgi
            AddHandler php55-cgi .php
            <Directory /Library/WebServer/Documents/>
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order Allow,Deny
                Allow from all
            </Directory>
            DirectoryIndex index.html index.php
        </VirtualHost>

    But when I go to http://127.0.0.1:8055/info.php, I get the following error:

        Forbidden
        You don't have permission to access /cgi-bin/php-cgi/info.php on this server

    Edit: I'm now switching between LoadModule php5_module /usr/local/php54/libphp5.so and LoadModule php5_module /usr/local/php55/libphp5.so. It works for now, but is not ideal. I would like to have the different versions of PHP on different virtual hosts.
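
    One common cause of that Forbidden error is that Apache has no access grant for the ScriptAlias target directory itself. A sketch of the usual fix, assuming the php-cgi binary lives under /usr/local/php55/cgi/ (note the trailing slash on the alias target):

        ScriptAlias /cgi-bin/ "/usr/local/php55/cgi/"
        <Directory "/usr/local/php55/cgi">
            Options +ExecCGI
            Order Allow,Deny
            Allow from all
        </Directory>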

  • Move MySQL database while instance is online

    - by Mike Scott
    I have a MySQL instance containing a number of databases, one of which is an archive database (although using the INNODB rather than ARCHIVE storage engine) that is not queried or written to in normal operation. The data filesystem is filling up and I'd like to move the archive database's data directory to a different filesystem (and then symlink it back, obviously). If there are no SQL statements attempting to query or update the data during the move, can I safely do this while the MySQL instance and the other databases stay online and in use? I plan to rsync the database directory to the new filesystem, then rename the old one on the original filesystem to something different and create the new symlink. lsof reports that MySQL does have the .ibd files open, so presumably it would have to reopen them.
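
    A sketch of that plan in shell form, where the datadir /var/lib/mysql, the database name archive, and the target /mnt/big are all assumptions. FLUSH TABLES asks mysqld to close its open table handles; since lsof shows InnoDB keeping the .ibd files open, the safest window for the final pass and the swap is while nothing touches the archive schema:

        # first pass while everything stays online
        rsync -a /var/lib/mysql/archive/ /mnt/big/mysql/archive/

        # close open table handles, then do a quick catch-up pass
        mysql -e "FLUSH TABLES"
        rsync -a /var/lib/mysql/archive/ /mnt/big/mysql/archive/

        # swap the directory for a symlink
        mv /var/lib/mysql/archive /var/lib/mysql/archive.old
        ln -s /mnt/big/mysql/archive /var/lib/mysql/archive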

  • Apache 2.4 PHP-FPM ProxyPassMatch for pretty URLs

    - by tubaguy50035
    I have a URL like http://newsymfony.dev/app_dev.php/_profiler/5080653d965eb. I would like to send this script to PHP-FPM. I currently have this as my vhost:

        <VirtualHost *:80>
            ServerName newsymfony.dev
            DocumentRoot /home/COMPANY/nwalke/www/newsymfony.dev/web/
            ErrorLog /var/log/apache2/error_log
            LogLevel info
            CustomLog /var/log/apache2/access_log combined
            <Directory /home/COMPANY/nwalke/www/newsymfony.dev/web/>
                AllowOverride All
                Require all granted
                DirectoryIndex app.php
            </Directory>
            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9003/home/COMPANY/nwalke/www/newsymfony.dev/web/$1
        </VirtualHost>

    If I browse to app_dev.php it works just fine. But if I do app_dev.php/_profiler/5080653d965eb, I get a 404 and the request never gets sent to FPM. How can I alter my ProxyPassMatch to pass anything with .php in the URL? I'm trying to do this with Symfony, but I'm pretty sure it applies to everything.
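
    The usual fix is to let the regex also capture the PATH_INFO that can follow the script name, so profiler-style URLs still map to the front controller. An untested sketch, with the same backend and docroot as above:

        ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9003/home/COMPANY/nwalke/www/newsymfony.dev/web/$1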

  • Installing Wordpress - constant PHP/MySQL extension appears missing

    - by Driss Zouak
    I've got Win2003 w/ IIS6, PHP 5 and MySQL installed. I can confirm PHP is installed correctly because I have a testMe.php that runs properly. When I run the WordPress setup, I am informed that "Your PHP installation appears to be missing the MySQL extension which is required by WordPress." But in my php.ini, in the DYNAMIC EXTENSIONS section, I have

        extension=php_mysql.dll
        extension=php_mysqli.dll

    I verified that mysql.dll and libmysql.dll are both in my PHP directory. I copied my libmysql.dll to the C:\Windows\System32 directory. When I try to run the initial setup for WordPress, I get the same error. I've Googled setting this up, and everything comes down to the above. I'm missing something, but none of the instructions that I've found online seem to cover whatever that is.
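
    A frequent culprit here is PHP loading a different php.ini than the one edited, or extension_dir not pointing at the folder that holds php_mysql.dll. A quick check sketch (hypothetical check.php; run it through IIS, since the CLI and the web server can read different ini files):

        <?php
        // which ini file did this PHP actually read, and where does it look for DLLs?
        echo php_ini_loaded_file(), "\n";
        echo ini_get('extension_dir'), "\n";

        // did the MySQL extensions load? WordPress of this era needs ext/mysql.
        var_dump(extension_loaded('mysql'));
        var_dump(extension_loaded('mysqli'));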

  • What is safe to exclude for a full system backup?

    - by seb
    Hi, I'm looking for a list of paths/files that are safe to exclude from a full system/home backup, given that I have a list of installed packages.

        /home/*/.thumbnails
        /home/*/.cache
        /home/*/.mozilla/firefox/*.default/Cache
        /home/*/.mozilla/firefox/*.default/OfflineCache
        /home/*/.local/share/Trash
        /home/*/.gvfs/
        /tmp/
        /var/tmp/

    Not real folders, but they can cause severe problems when restoring:

        /dev
        /proc
        /sys

    What about /var in general? /var/backups/ can get quite large; /var/log/ does not require much space and can help for later comparison. And /lost+found/?
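
    As a usage sketch, the list above translates into rsync options like this (the /mnt/backup target is an assumption; -aAX preserves permissions, ACLs, and extended attributes):

        rsync -aAX --one-file-system \
            --exclude='/home/*/.thumbnails' \
            --exclude='/home/*/.cache' \
            --exclude='/home/*/.mozilla/firefox/*.default/Cache' \
            --exclude='/home/*/.mozilla/firefox/*.default/OfflineCache' \
            --exclude='/home/*/.local/share/Trash' \
            --exclude='/home/*/.gvfs' \
            --exclude='/tmp/*' --exclude='/var/tmp/*' \
            --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
            / /mnt/backup/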

  • Raspberry Pi: move home & var folders to USB HDD?

    - by doni49
    I have a Raspberry Pi running Raspbian. I've installed Apache2, PHP and MySQL. Apache and MySQL are both configured to use sub-directories of /var. I'd like to use a directory on my USB HDD instead of my SD card. I'd also like to move the /home directory to the USB HDD. I'd like to avoid re-partitioning my HDD if possible. I thought maybe I could use a symlink to tell Raspbian that /var is really at /media/USBHDD1/var and /home is really at /media/USBHDD1/home. I tried it last night but couldn't get it to work.
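
    Symlinking /var tends to fail because /var is needed early in boot, before /media/USBHDD1 gets auto-mounted. A commonly suggested alternative is to mount the drive from /etc/fstab by UUID (so it comes up early; nofail keeps the Pi booting if the drive is absent) and bind-mount its directories over /var and /home. A sketch with a placeholder UUID and hypothetical paths; copy the existing contents onto the drive first (e.g. with rsync -a) before enabling the binds:

        # /etc/fstab
        UUID=xxxx-xxxx      /mnt/usbhdd   ext4   defaults,nofail   0   2
        /mnt/usbhdd/var     /var          none   bind              0   0
        /mnt/usbhdd/home    /home         none   bind              0   0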

  • Redirect URL within Apache VirtualHost?

    - by DisgruntledGoat
    I have a dedicated server with Apache, on which I've set up some VirtualHosts. I've set up one to handle the www domain as well as the non-www domain. My VH .conf file for the www:

        <VirtualHost *>
            DocumentRoot /var/www/site
            ServerName www.example.com
            <Directory "/var/www/site">
                allow from all
            </Directory>
        </VirtualHost>

    With this .htaccess:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} ^www.example.com [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Is there a simple way to redirect the www to the non-www version? Currently I'm sending both versions to the same DocumentRoot and using .htaccess, but I'm sure I must be able to do it in the VirtualHost file.
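
    Yes: the idiomatic approach is a second VirtualHost for the www name that does nothing but redirect. A sketch; Redirect comes from mod_alias, so no rewrite rules or .htaccess are needed:

        <VirtualHost *>
            ServerName www.example.com
            Redirect permanent / http://example.com/
        </VirtualHost>

        <VirtualHost *>
            ServerName example.com
            DocumentRoot /var/www/site
        </VirtualHost>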

  • Projected Results: Sound project management practices, combined with a complete technology platform, have an immediate and lasting impact on an organization’s bottom line.

    - by Melissa Centurio Lopes
    Article by Alan Joch, a business and technology writer who specializes in enterprise applications, cloud computing, mobile computing, and the Web. It’s no secret that complex, large-scale projects need close management controls to ensure that they’re delivered on time and on budget. But now there’s growing evidence that failing to meet these goals can have far-reaching consequences, not only for the reputations and value of individual organizations but also for the tenure of their top executives. Government watchdogs forced one large contractor to suspend a multibillion-dollar defense program—and delay payment receipts—until a better management system was launched to more accurately track spending, project milestones, and other fundamental metrics. Significant delays in the opening of the £4.3 billion Terminal 5 at Heathrow Airport impaired an airline’s operations and contributed to a drop in its share prices. These real-world examples are noteworthy because of the huge financial risks they created. They’re also far from being isolated cases. Research by the Economist Intelligence Unit found that only 11 percent of companies claimed they delivered expected ROI on major capital projects 90 percent of the time or more. In addition, 12 percent of respondents said they achieved planned ROI less than half the time. According to Phil Thornton, lead consultant at the analyst firm Clarity Economics, the numbers demonstrate obvious challenges related to managing risks, accurately predicting ROI, and consistently delivering bottom-line growth for major capital investments. “Portfolio management is a path to improve your organization’s competitive advantage. It helps make sure your organization is investing in the right things and not spending its time on things that are not delivering the intended results for the firm.” Read the full article here.

  • Oracle WebCenter in Action: Best Practices from Oracle Consulting

    - by Kellsey Ruppel
    Oracle WebCenter in Action: Best Practices from Oracle Consulting. See concrete, real-world examples of deployments throughout the Oracle WebCenter stack. Oracle Consulting will lead you through a discussion about best practices and key customer use cases, as well as offer practical tips to support web experience management, enterprise content management, and portal deployments. Watch this webcast as our presenters discuss: best practices for deployments of large, complex architectures with Oracle WebCenter Sites; key deployments and helpful hints for Oracle WebCenter Content; and performance-tuning takeaways when using Oracle WebCenter Portal. Watch the webcast by registering now.

  • How do your busiest people transfer their knowledge?

    - by Wikis Commit At Area 51
    We recently polled our company-wide wiki users and found that there are two large groups of users: people with lots of knowledge but (who claim they have) no time to document, and people with time but (who claim they have) not enough knowledge worth documenting. Each group covered almost 50% of the users! How do your companies handle this? That is, how do you encourage your busiest / most knowledgeable people to share their knowledge?

  • OS X Mavericks Won't Connect To Ubuntu Server (Netatalk, Avahi)

    - by Andy Ibanez
    I'm really sorry for posting this. I know it may have been asked a thousand times. I have googled like crazy and I'm on the verge of desperation here. Basically, I followed this guide: http://motionsoundfx.com/2012/05/ubuntu-vnc-afp-macosx/ to create a small personal file server. When I installed it, I was able to connect just fine: I logged in with my Ubuntu username and password and could see the home directory. But later, I had to restart the file server so I could prepare a couple of other hard drives to put in. When the server restarted, I tried to connect to it, but I got an error message on my Mac: "The version of the server you're trying to connect to is not supported. Please contact your system administrator to solve this problem." Again, I have googled like crazy for this, and everybody says it is a problem with OS X Lion and up (I assume it affects Mavericks too). I have tried all the fixes mentioned for Lion and Mountain Lion and I haven't had any luck. That's the reason I'm posting this here: I suspect the problem is with my Ubuntu server, since it appeared only after the restart. Before restarting, I just put in my credentials and saw my home directory, so something must have gotten messed up during the restart. I have found some other suggested solutions, including using "SHX2" in the conf file, but that hasn't worked for me. I ask for your help to solve this issue. Also, please understand I'm completely illiterate when it comes to Linux. This is a nice chance for me to learn the OS, so please give me detailed steps if you deem it necessary. Thank you! I'm using Ubuntu Server 13.10 (the latest one as of today).
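
    That particular message on Lion and later usually means the Mac and afpd could not agree on an authentication method: newer OS X versions want DHX2. Assuming the guide left you with netatalk 2.x, the commonly cited fix is enabling uams_dhx2.so in /etc/netatalk/afpd.conf (a sketch, not verified against this particular install), then restarting with sudo service netatalk restart:

        # /etc/netatalk/afpd.conf -- the final, uncommented line
        - -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword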

  • phpize: m4 error just in one extension

    - by Francois
    On Linux, I installed PHP 5.3.8 from source. Using phpize to install an extension works fine, except for one specific extension (mysqlnd):

        # cd /opt/php/5.3.8/ext/pdo && /opt/php/5.3.8/bin/phpize
        ... this runs ok
        # cd /opt/php/5.3.8/ext/mysqlnd && /opt/php/5.3.8/bin/phpize
        Cannot find config.m4.
        Make sure that you run '/opt/php/5.3.8/bin/phpize' in the top level source directory of the module

    As you can see, the error can't be that I'm not in the top-level source directory of the module, since I am. I tried calling phpize from the ext folder as well, and that did not work either! For the record, I have m4 installed. Any idea? Thanks :)
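
    The likely explanation: ext/mysqlnd is not a phpize-able extension. It ships a config9.m4 rather than a config.m4 because it is meant to be compiled into PHP itself as the driver behind mysqli/PDO, not built standalone. A sketch of the usual route (add these switches to whatever ./configure options the original build used):

        cd /opt/php/5.3.8
        ./configure --with-mysql=mysqlnd --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd
        make && make install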

  • 403 Forbidden

    - by demas
    Here is my Nginx config:

        user pass users;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the Nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error and how can I fix it?
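
    "directory index ... is forbidden" means nginx served the request itself, i.e. Passenger never engaged. Two common causes are an nginx binary that was not compiled with the Passenger module, and root not pointing at the application's public directory (for Redmine that would typically be .../redmine/public). A quick diagnostic sketch:

        # was this nginx built with Passenger support at all?
        nginx -V 2>&1 | grep -i passenger

        # can the worker user ("pass") actually traverse and read the app directory?
        sudo -u pass ls /www/public/redmine/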

  • Ant trouble with environment variables on Ubuntu

    - by Inaimathi
    Having some trouble with Ant reading environment variables on Ubuntu 9.1. Specifically, the build tasks my company uses have a token like ${env.CATALINA_HOME} in the main build.xml. I set CATALINA_HOME to the correct value in /etc/environment, ~/.pam_environment and (just to be safe) my .bashrc. I can see the correct value when I run printenv from bash, or when I eval (getenv "CATALINA_HOME") in emacs. Ant refuses to build to the correct directory, though; instead I get a folder named ${env.CATALINA_HOME} in the same directory as my build.xml. Any idea what's happening there, and/or how to fix it?
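
    A literal ${env.CATALINA_HOME} folder is Ant's way of saying the property was never defined: env.* properties only exist if the build file declares the environment prefix first. It is worth checking that build.xml contains something like this near the top (a minimal sketch):

        <project name="example" default="show-home">
            <!-- expose OS environment variables as Ant properties under the "env." prefix -->
            <property environment="env"/>

            <target name="show-home">
                <echo message="CATALINA_HOME is ${env.CATALINA_HOME}"/>
            </target>
        </project>

    Note also that Ant only sees variables exported in the shell that launches it; values added to /etc/environment apply to new login sessions, not to an already-open terminal.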

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background

    While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I’d been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and would return the quote data. To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. Not quite long enough for customers to typically notice a quote go “stale”, but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time.

    A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions

    The main thing I’ve always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required. I’ve worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward. You can choose from here whether to debug an executable, a web application (either in IIS or from VS’s web development server), windows services, etc.

    So I chose a .NET Executable and navigated to the build location of my test harness. Then began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.

    Snapshots

    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn’t get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh!

    Analysis

    It looks like (from the second pie chart) that the majority of the allocations were in the string class.
    This is also expected for me, because the majority of the memory allocated is in the web service responses, so it doesn’t seem the entities I’m adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) are taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as “No issues with large object heap fragmentation were detected”. For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for “What to look for on the summary” which loads a web page of help on key points to look for.

    Clicking over to the session overview, it’s easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I’m pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control. Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends.

    Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy.

    I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might try to filter for survivors in growing classes. That is, for instances of a class that are growing in memory (more are being created than cleaned up), it shows which ones are survivors (not collected) of garbage collection. This might allow me to drill down and find places where I’m holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the “Instance Retention Graph”, which creates a graph showing what references are being held in memory and who is holding onto them.

    Visual Studio Integration

    Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you’ll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn’t integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it.

    Summary

    If you like the ANTS series of tools, you shouldn’t be disappointed with the ANTS Memory Profiler.
    It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I’ve used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It’s built for your everyday developer to get in and find their problems quickly, and I like that!

  • Apache - Include conf Files Relative to ServerConfigFile (-f arg)

    - by Synetech inc.
    Hi, I want to use the -f command-line option for the Apache server so that I can store the conf files in a separate place (a data directory) from the server binaries. The problem is that I use the Include directive to separate and organize the configurations, but when I use a command like Include "addons/SVN.conf", it fails because Apache looks for addons/SVN.conf relative to the ServerRoot directory instead of the ServerConfigFile directory. I can work around this by using absolute paths (e.g. Include "e:\foo\bar\baz\Apache\conf\addons\svn.conf"), but I don’t like that since it means I would have to change each and every Include directive if I move the conf folder, as opposed to simply changing the -f option. Does anyone know of a way to get the Include directive to work relative to the conf file that Apache is passed? I tried Include "./addons/SVN.conf", but that too was relative to the ServerRoot. This forced relative-to-ServerRoot Include behavior kind of defeats the whole purpose of specifying an alternate config file to the one in ServerRoot/conf. Thanks.
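
    One workaround, assuming Apache 2.4 or later: keep the absolute path in a single Define at the top of the main conf file, so moving the directory means editing one line. A sketch with a hypothetical path:

        # httpd.conf, loaded via: httpd -f "e:/data/Apache/conf/httpd.conf"
        Define CONF_DIR "e:/data/Apache/conf"

        Include "${CONF_DIR}/addons/SVN.conf"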

  • Passenger apache default page error

    - by gshankar
    I just installed Passenger and the Passenger Pref Pane on OS X. However, when I try to browse to one of my Rails applications, I just get the default Apache "It works!" page. I've checked the vhost definitions and they seem OK, so I can't figure out what's wrong... I've tried reinstalling Passenger and the pref pane and restarting Apache, but to no avail. Anyone know how to fix this? My vhost definition looks like this:

        <VirtualHost *:80>
            ServerName boilinghot.local
            DocumentRoot "/Users/ganesh/Code/boilinghot/public"
            RailsEnv development
            <Directory "/Users/ganesh/Code/boilinghot/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
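
    The default page usually means Apache never matched the vhost. Two common culprits on the Apache 2.2 that ships with OS X (a sketch): name-based vhosts must be switched on before any <VirtualHost *:80> block, and the .local hostname has to actually resolve to the machine:

        # in httpd.conf, before the vhost definitions (Apache 2.2 syntax)
        NameVirtualHost *:80

        # and boilinghot.local must resolve locally, e.g. this line in /etc/hosts:
        # 127.0.0.1   boilinghot.local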

  • How do I tar dot files but not dot directories

    - by bjackfly
    The following tar command will exclude all dot files and dot directories:

        tar -cvzf /media/bjackfly/bkup/bkup.gz --exclude '.*' --one-file-system /home/bjackfly

    In my case I want the dot files in the home directory (.vimrc, .bashrc, etc.) to be backed up, but not the dot directories /.config, /.cache, /.eclipse, etc. Any Linux gurus with a command for this? Or do I need to pipe a find into tar, or run two different tar commands (one for dot files in the home directory and one for everything else), which is non-ideal?
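
    One way to do it in a single tar run (a sketch, assuming GNU tar): build the exclude list from the top-level dot directories with find, then feed it to --exclude-from, so dot files survive while dot directories are skipped:

        cd /home
        find bjackfly -mindepth 1 -maxdepth 1 -type d -name '.*' > /tmp/dot-dirs.txt
        tar -czvf /media/bjackfly/bkup/bkup.gz --one-file-system \
            --exclude-from=/tmp/dot-dirs.txt bjackfly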

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually, it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after changing the background, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab, so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?
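
    Cron can run gsettings, but gsettings talks to dconf over the D-Bus session bus, and cron jobs do not inherit DBUS_SESSION_BUS_ADDRESS. The usual fix is to borrow the address from the running session; a sketch, assuming a single GNOME session for the user:

        #!/bin/bash
        # find the user's gnome-session and lift the bus address from its environment
        PID=$(pgrep -u "$LOGNAME" gnome-session | head -n 1)
        export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS "/proc/$PID/environ" | cut -d= -f2- | tr -d '\0')

        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml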
