Search Results

Search found 25998 results on 1040 pages for 'home folder'.


  • Internet stopped working suddenly on 12.04

    - by Daniel
    My laptop was running smoothly until yesterday. Today, I can't connect to the Internet at home anymore. I can reach the router, but have no Internet access. I have a Dell Latitude E6320 with Ubuntu 12.04. At my job, I don't have any problems connecting this laptop via either wireless or Ethernet. At home, if I connect through Windows instead, it works fine. I even checked the MAC address and it's OK. My other laptop, which also runs Ubuntu, is not facing this problem. I have already tried restarting and downgrading the network-manager package and its dependencies. Can anyone help me, please? I am afraid I will have to reinstall everything.

    Read the article

  • Function names - "standardised" prefixes

    - by dnsmkl
    Imagine you have routines like these:

        /* just do X. Fail if any precondition is not met */
        doX()

        /* take care of preconditions and then do X */
        takeCareOfPreconditionsCheckIfNeededAtAllAndThenDoX()

    A slightly more concrete example:

        /* create directory. Most probably fails with an error if any precondition
           is not met (folder already exists, parent does not exist) */
        createDirectory(path_name)

        /* takes care of preconditions (creates the full path up to the folder if
           needed, checks that it does not exist yet) and then creates the directory */
        CheckIfNotExistsYet_CreateDirectory_andFullPathIfNeeded(path_name)

    How do you name such routines so that it is clear which does what? I have come up with my own "convention", like naiveCreateDirectory, ForceDirectoryExists, ... But I imagine this is a very common situation. Maybe some norms/conventions for this already exist?
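
    A minimal sketch of the split in shell, with hypothetical names (the naming pattern, not the implementation, is the point):

        # "Naive" variant: attempt the operation, fail if preconditions are unmet
        create_directory() {
            mkdir "$1"    # errors out if the parent is missing or the dir exists
        }

        # "Ensure/force" variant: establish the preconditions, then succeed either way
        ensure_directory_exists() {
            [ -d "$1" ] || mkdir -p "$1"    # builds the full path, no-op if present
        }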

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack on CentOS 6.5. My current setup is giving me the following error: 502 Bad Gateway. Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png), but I did see the error from the title ("unable to read what child say: Bad file descriptor") in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the folders referenced below exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent, good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non-responding client, this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf is my php-fpm pool config. I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
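
    Not from the original post, but a diagnostic sketch for this kind of 502, assuming the socket path from the upstream block above: nginx returns 502 when it cannot talk to the FastCGI backend, so the first things to check are whether php-fpm is up and whether the www-data user can reach its socket.

        # Is php-fpm running, and does its socket exist?
        sudo service php-fpm status
        ls -l /var/run/php-fcgi-www-data.sock

        # Can the nginx worker user actually read/write the socket?
        sudo -u www-data test -r /var/run/php-fcgi-www-data.sock && echo readable
        sudo -u www-data test -w /var/run/php-fcgi-www-data.sock && echo writable

        # Watch both error logs while requesting the page
        sudo tail -f /var/log/nginx/error.log /var/log/php-fpm/error.log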

    Read the article

  • How to set up offline manifest for a web app to run in Safari in iOS?

    - by ahmd1
    I'm currently trying to set up an offline.manifest file for my web app to be used offline on an iOS device. For testing purposes I have a very simple HTML page that I'm trying to add to the home screen. I'm testing it on a live iPhone 4, but after the page is added to the home screen, when I put the iPhone in airplane mode and try to start my web app, I get this error: "Turn Off Airplane Mode or Use Wi-Fi to Access Data", and then if I click OK I get: "Cannot Open Web App Name" / "Web App Name could not be opened because it is not connected to the Internet". The following is added to the HTML file:

        <!DOCTYPE html>
        <html lang="en" manifest="scrts/offline.manifest">

    and the offline.manifest is composed as such:

        CACHE MANIFEST
        ../pics/bkgnd_iphn_settings.png
        ../pics/mbl_btn_fb.png
        ../pics/mbl_btn_twt.png
        ../pics/icon_57_57_bg.png
        ../pics/icon_72_72_bg.png
        ../pics/icon_114_114_bg.png
        ../pics/icon_144_144_bg.png
        ../pics/splash_320_460_bg.png
        ../pics/splash_768_1004_bg.png
        ../pics/splash_1004_768_bg.png

    I got all the instructions on composing it from here. I also adjusted the .htaccess file to add this line:

        AddType text/cache-manifest .manifest

    Any idea what I'm not doing right?
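
    One check worth adding (not from the original question; http://example.com is a placeholder for the real site): iOS ignores the manifest unless it is served as text/cache-manifest, so it is worth confirming the .htaccess change actually took effect:

        # The response should include: Content-Type: text/cache-manifest
        curl -I http://example.com/scrts/offline.manifest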

    Read the article

  • How can I fit 2 commands in 1 terminal shortcut

    - by Nicky Bailuc
    I have the latest updates and drivers, and I need to run a game called Unreal Tournament, but in the terminal it requires 2 commands. The first one is to move into the folder:

        cd /usr/local/games/ut2004/

    and then the second one is to open the actual game:

        sudo aoss ./ut2004

    In one shortcut I can only fit 1 command, and both don't fit in. Is there any way I can turn these 2 commands into one? Perhaps turning on the desktop shortcut already inside the folder? Any help would be really appreciated, because I'm getting kinda sick of using the terminal to run it every time.
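
    A sketch of the usual trick (not from the question): wrap both commands in a single shell invocation, which fits in a launcher's one command field. The && runs the game only if the cd succeeded:

        bash -c 'cd /usr/local/games/ut2004 && sudo aoss ./ut2004'

    Note that sudo still needs a terminal to prompt for a password, so the shortcut may need to be of the "run in terminal" kind, or sudo configured accordingly.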

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to succumb eventually. I was also a little surprised at the failure of various tools at removing it. The virus would redirect the browser to websites including ‘licosearch’, ‘hugosearch’ and ‘facebook’, and the disk would be thrashing away, infecting dlls in some way. I had the fully up-to-date version of McAfee installed. It identified that there was an issue in some dlls on the system and was able to ‘fix’ them. But they kept getting re-infected. So I installed Microsoft Security Essentials, and this too was able to identify and ‘fix’ the infected dlls. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos, to no avail. Eventually I thought I’d investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe which, I’m guessing, would do bad things, including infecting as many dlls on the system as possible. I removed Firefox, and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome and, yes, it then started to launch 3 instances of iexplore.exe. If I’m honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my User Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked, they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough, there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted these, rebooted, and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe, so I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon, and this entry had also been changed. Finally, I ran a full system scan to clean up any infected dlls. I hope it’s gone for good!

    Read the article

  • File access forbidden in htpasswd

    - by Nerd-Herd
    I have been using the htpasswd generated in this question, and it seemed to have been working well until recently. Since yesterday, I have not been able to access the newest file created in the ChatLogs folder (named 10_07_2012.txt). The server returns a 403 Forbidden error saying: "Forbidden: You don't have permission to access /ChatLogs/2012/07/10_07_2012.txt on this server." I am still able to access older files (up to 09 July, 2012). At first I thought it might be because of file permissions, but they are the same as on the other 9 files in the folder. What could be the problem? Please help.
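
    Not from the original post: a quick way to see what the web server actually sees, assuming the paths from the error message (/path/to is a placeholder for the real document root):

        # Compare the failing file against a working sibling
        ls -l /path/to/ChatLogs/2012/07/

        # Walk the permissions of every directory component in one shot
        namei -l /path/to/ChatLogs/2012/07/10_07_2012.txt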

    Read the article

  • Reinstall Ubuntu on Custom Partitions

    - by Forerunner117
    I am attempting to reinstall Ubuntu 13.04 without losing my installed software and /home docs. I have read countless threads on this same topic, but nothing seems to apply to my situation. When I originally installed, I created a separate partition for /home, but I am now unsure of which partition that was. Based on the picture below, where should I be installing the new copy? Also, will I run into problems since I am now running 13.10 and want to put 13.04 back on it? Should I grab 12.04 or 13.10 for this reinstall? Picture: http://i.stack.imgur.com/FL2SY.png (Note: I am performing this reinstall due to a complete muck-up of my Unity/Compiz settings and configuration, resulting in no desktop. I've done my best to resolve this problem first before resorting to this.)
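
    Not from the original question, but a hedged way to identify which partition currently holds /home, from a live-session terminal or recovery console:

        # List partitions with filesystems, labels and mount points
        lsblk -f

        # If the installed system is booted (or its root is mounted), these help too:
        df -h /home             # shows the device backing /home
        grep home /etc/fstab    # a separate /home partition has its own fstab line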

    Read the article

  • Creating basic ACPI event makes the system unusably slow

    - by skerit
    I want to change a few settings on my laptop when I switch to battery power. I created a new event in /etc/acpi/events/cust-battery, and it looks like this:

        event=battery
        action=/home/skerit/power.sh

    I put a simple command in the power.sh file:

        echo This is a test >> /home/skerit/powertest

    Now, when I tail this file, it shows "This is a test" 4-5 times upon switching to battery power. However, the system becomes totally unstable. It slows down significantly. I can't change anything in the terminal. The terminal and certain parts of the screen (like the GNOME system monitor applet) go blank from time to time. What can be the cause of that? It's a simple echo that gets executed a few times!
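
    A hedged guess at a mitigation (not from the question): event=battery matches every battery status event, and some hardware emits those continuously while on battery, so the action can fire in a tight loop. Narrowing the match (e.g. event=ac_adapter in the event file) and debouncing in the script is the usual fix; a sketch, noting that the /sys path varies by machine:

        #!/bin/bash
        # power.sh with a debounce: only act when the power state actually changes
        state=$(cat /sys/class/power_supply/AC/online 2>/dev/null)  # 1 = AC, 0 = battery
        last=$(cat /tmp/last_power_state 2>/dev/null)
        [ "$state" = "$last" ] && exit 0        # same state as last time: do nothing
        echo "$state" > /tmp/last_power_state
        echo "This is a test (state=$state)" >> /home/skerit/powertest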

    Read the article

  • Quick path jumping

    - by Sebastian P.
    I was just at a lecture, where I noticed the lecturer using a command (probably aliased) to jump to a specific folder. Example:

        ~/code$ j sciproj
        ~/projects/sciproj2011/$

    This looked quite slick, so I started wondering: is this a standard utility, and if so, what is its name? I have two theories as to how it works: (1) it can create, delete and jump to aliases directly from the command line, in the style of the example, without having to set up aliases in a configuration file or script manually; (2) it searches the home directory for a folder matching the name and jumps to it. The second option seems a bit slow, however, so the first would be preferred.
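
    Not from the question: autojump and similar tools provide exactly this kind of j command by learning frequently visited paths, but theory (1) can also be sketched in a few lines of bash, using a directory of symlinks as named bookmarks (the helper names here are made up):

        # Put in ~/.bashrc. "mark sciproj" bookmarks the current dir; "j sciproj" jumps back.
        export MARKPATH="$HOME/.marks"
        mark()   { mkdir -p "$MARKPATH"; ln -sfn "$(pwd)" "$MARKPATH/$1"; }
        unmark() { rm -f "$MARKPATH/$1"; }
        j()      { cd -P "$MARKPATH/$1" 2>/dev/null || echo "No such mark: $1"; }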

    Read the article

  • Ubuntu One not syncing and not providing feedback

    - by Joe
    Firstly, I apologize for not being more specific with my problem, but Ubuntu One is just not providing me with any information. It appears to be working; it states "synchronization in progress..." but it never actually synchronizes (by never I mean 3 days). When I first selected a folder to sync using Ubuntu One, it took on the order of hours to sync over 500 MB of files: it uploaded the folder hierarchy first and populated the folders over a few hours. That is not happening at all now. Please let me know if there is a way I can get more information out of Ubuntu One that I can post, and hopefully resolve this issue. Thanks, Joe
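
    Not from the original post: the command-line client reports more than the control panel does; a couple of hedged probes (flag names varied a little between releases):

        u1sdtool --status       # overall sync daemon state
        u1sdtool --waiting      # what is still queued for transfer
        tail -f ~/.cache/ubuntuone/log/syncdaemon.log   # the daemon's own log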

    Read the article

  • How can I instruct nautilus to pre-generate thumbnails?

    - by Glutanimate
    I have a large library of PDF documents (papers, lectures, handouts) that I want to be able to quickly navigate through. For that I need thumbnails. At the same time, however, I see that the ~/.thumbnails folder is piling up with thumbs I don't really need. Deleting thumbnail junk without removing the important thumbs is impossible, and if I were to delete them all, I'd have to go to each and every folder with important PDF documents and let the thumbnail cache regenerate. I would love to be able to automate this process. Is there any way I can tell Nautilus to pre-cache the thumbs for a set of given directories? Note: I did find a set of bash scripts that appear to do this for pictures and videos, but not for any other documents. Maybe someone more experienced with scripting might be able to adjust these for PDF documents, or at least point me in the right direction on what I'd have to modify for this to work with PDF documents as well.
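
    Not from the question, but a rough sketch of the idea for PDFs, assuming evince-thumbnailer is installed (it ships with GNOME's document viewer). Per the freedesktop thumbnail spec, the cache key is the MD5 of the file URI under ~/.thumbnails/normal; note that Nautilus may still regenerate thumbnails lacking the spec's optional metadata, and paths with spaces would need URI-escaping, so treat this as a starting point:

        #!/bin/bash
        # Pre-generate 128px thumbnails for every PDF directly under $1
        mkdir -p "$HOME/.thumbnails/normal"
        for f in "$1"/*.pdf; do
            [ -e "$f" ] || continue
            uri="file://$(readlink -f "$f")"
            hash=$(printf '%s' "$uri" | md5sum | cut -d' ' -f1)
            evince-thumbnailer -s 128 "$f" "$HOME/.thumbnails/normal/$hash.png"
        done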

    Read the article

  • Ubuntu One not syncing fully

    - by wurlyfan
    I have uploaded several folders of data to Ubuntu One from my desktop computer, over the last few weeks, and I can see that all the contents are there when I look at my account on the web. When I look at my laptop (connected to the same account), one of the first folders I uploaded hasn't downloaded completely. New folders added from either device seem to sync correctly, but this one older folder remains almost empty, even though the control panel says file syncing is up-to-date. I have plenty of space available. Stopping and restarting the sync daemon and rebooting the laptop are both ineffective. What can I do to make this folder sync fully? I don't want to risk losing the data (which now exists only in Ubuntu One), and I don't have a lot of broadband data to play with. I've seen several bugs relating to this sort of issue but they're all quite old and apparently fixed, while this is happening on new 13.04 installations on both desktop and laptop.

    Read the article

  • The Best How-To Geek Articles for November 2012

    - by Asian Angel
    Last month we covered topics such as why 64-bit Windows needs a separate “Program Files (x86)” folder, how to uninstall your Windows product key before selling your PC, how to deal with locked files in Windows, and more. Join us as we look back at the best articles for November.

        HTG Explains: Does Your Android Phone Need an Antivirus?
        How To Use USB Drives With the Nexus 7 and Other Android Devices
        Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder?

    Read the article

  • Compiz setting migration from 12.04 to 12.10

    - by Maksim
    I just migrated my Ubuntu 12.04 home folder to a freshly installed Ubuntu 12.10, and again have a problem with Compiz: it has none of my custom settings! How is it possible that a full copy of the home dir does not transfer the settings? The trouble began when Compiz started playing with digits in its config folder names, like .compiz-1 vs .compiz. I have no idea when I have to use the -1 variant and when not. I've renamed my Compiz folders in .gconf and .compiz, which did not help. How do I get my settings back, and why is this mess happening?

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address for which I have created an A record, git.mydomain.com. The first thing I did to the instance was add 1 GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished

        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished

        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished

        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error, I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending "127.0.0.1 git.mydomain.com" to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has "host: git.mydomain.com" in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
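
    Not from the original post: the failing step is gitlab-shell calling back into the GitLab API and getting connection refused, which usually means the Rails side (Unicorn) is not answering. A hedged next probe, assuming the stock 6.2 init script and log locations:

        # Are the GitLab (Unicorn/Sidekiq) processes up?
        sudo service gitlab status

        # Does anything answer on the URL gitlab-shell calls back to?
        curl -I http://git.mydomain.com/
        curl -I http://localhost/

        # If nginx responds but Rails does not, Unicorn's log usually says why
        sudo tail -n 50 /home/git/gitlab/log/unicorn.stderr.log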

    Read the article

  • Is adding in the header the license type enough to say: "my code is licensed"?

    - by silverfox
    I read on various sites about licenses. For my code (in my case a JavaScript file, open-source), I just put the license type in the header:

        /*
         * "codeName" "version"
         * http://officialsite.com/
         *
         * Copyright 2012 "codeName"
         * Released under the "LICENSE NAME" license
         * http://officialsite.com/LICENSE NAME
         */
        javascript code ...

    In the same folder I leave a copy of the license. The listing of the folder looks like this:

        codeName.js
        LICENSE

    The file LICENSE holds the full text of the license my code uses. What I cannot find anywhere is whether this is enough to say my code is licensed (in the open-source case). Is something more required?

    Read the article

  • Ubuntu samba file and print server

    - by Gerd
    I am using an old Dell desktop PC as a Samba file and print server for a Windows 7 network. The version of Ubuntu I am running is 12.04, which seems too big for the RAM and disk space I have available: Ubuntu 12.04 runs very, very slowly, though printing over the home network works at a quite acceptable speed. I have two problems: a SLOW Ubuntu, and I cannot update Samba; the update manager tells me "update failed". So my two questions: is there anything I can remove from the standard 12.04 installation to make Ubuntu run faster, and should I uninstall and then reinstall Samba to overcome the "update failed" problem? I hesitate, as printing and file serving work somehow and I don't want to lose that. I know I should give you more information to get answers, but what would you need to know? I am not a natural Linux user; Windows 7 is still my home. Thanks in advance, Gerd

    Read the article

  • Lubuntu 14.04 Problem starting lxsession-default-apps

    - by user278179
    I have one problem: I can't execute lxsession-default-apps on Lubuntu 14.04, because it says "The database is updating, please wait". If I try to run lxsession-default-apps, I get this error:

        ** Message: utils.vala:30: config_path_directory: /home/USER/.config/lxsession-default-apps
        ** Message: desktop-files-backend.vala:171: test config_path: /home/USER/.config/lxsession-default-apps/settings.conf
        ** Message: desktop-files-backend.vala:237: Scanning folder: /usr/share/applications
        ** Message: desktop-files-backend.vala:278: Start scanning
        ** Message: desktop-files-backend.vala:257: Scanning folder: /usr/share/app-install/desktop
        ** Message: desktop-files-backend.vala:278: Start scanning
        Error: list_files failed: No such file or directory
        ** Message: desktop-files-backend.vala:333: Finishing scanning
        ** Message: desktop-files-backend.vala:189: Signal finish scanning with mode: write
        ** Message: desktop-files-backend.vala:333: Finishing scanning

    Any help would be appreciated. Thanks. Regards.
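
    Not from the original post: the scan dies on /usr/share/app-install/desktop, so a hedged first probe is to check whether that folder exists at all; on Ubuntu it is shipped by the app-install-data package (an assumption worth verifying before installing):

        ls -ld /usr/share/app-install/desktop
        sudo apt-get install app-install-data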

    Read the article

  • How to access localhost remotely - Wordpress?

    - by Marcappuccino
    I have installed a LAMP stack (sudo tasksel install lamp-server) and WordPress (sudo apt-get install wordpress), but now I would like to access my server remotely, to be used as a home fileserver. For example, my public IP is 82.16.xxx.xxx. Opening it in Firefox with the suffix :8080 brings up my router config page...? Do I have to set up port forwarding? BTW, accessing via localhost/wordpress/ works fine; I would just like to access my files while away from home.
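
    Not from the question: yes, this needs port forwarding. Hitting :8080 on the public IP reaches the router's own remote-admin page, not the server behind it. A hedged sketch of the setup:

        # On the server, find its LAN address (e.g. 192.168.1.x)
        hostname -I

        # Then, in the router's admin UI, forward an external port (say 8080)
        # to <LAN-IP>:80, and test from outside the network:
        curl -I http://82.16.xxx.xxx:8080/wordpress/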

    Read the article

  • "Failed to mount Windows share" error in Samba

    - by Ranjith R
    This is the situation. There are 3 machines in the office. The operating systems on them are, respectively:

        1. Linux Mint
        2. Ubuntu 12.04
        3. Windows Vista

    The Ubuntu machine (#2) is supposed to be the common file server between machines #1 and #3. Machine #2 has two hard disks: one is a 500 GB NTFS empty drive and the other is a 160 GB ext4 drive. My plan is to make the 500 GB disk the file-sharing disk. When I share a folder like ~/Documents using the Nautilus context menu on machine #2, I can access the files easily on both #1 and #3, but when I try to share some folder on the 500 GB disk, I get an error on machine #1 that says "Failed to mount Windows share". I do not mind formatting the drive to ext4 if needed, but I am sure that something simple is wrong. EDIT: I took @Marty's comment as a hint and used ntfs-config to configure automount of that partition. It is working now. Thanks.
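
    Not from the post: ntfs-config essentially writes an fstab entry; a hand-rolled equivalent, sketched with a hypothetical UUID and mount point:

        # Find the NTFS partition's UUID
        sudo blkid | grep -i ntfs

        # Add a line like this to /etc/fstab (UUID and ownership options are placeholders):
        #   UUID=0123456789ABCDEF  /media/share  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0
        sudo mkdir -p /media/share
        sudo mount -a    # mounts everything in fstab, surfacing errors immediately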

    Read the article

  • Microsoft Office 2013 Takes New Approach

    You can check out an article from Computerworld for a good look at the questions and answers about the new software. For instance, you've probably noticed that I'm not giving the full name. That's because Microsoft seems to be using several names. If you go the traditional route and pay the one-time upfront fee for the shrink-wrapped edition, it's Office 2013. There's also a tablet version called Office Home and Student 2013 RT - but that won't include the iPad, or at least not at first. The consumer preview, which I'll be linking to in a minute, is dubbed Office 365 Home Premium. There ...

    Read the article

  • Disable automatic starting of sshd?

    - by b.long
    Simple question here: what's the correct way to stop the sshd service from starting when the OS boots? I'm not sure if this answer is correct, so I'm hoping some guru(s) can help me out! What I'd like is a configuration that (after boot) allows me to start the service using sudo service ssh start when necessary. Version info:

        me@home:~$ ssh -V
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        me@home:~$ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04.1 LTS
        Release:        12.04
        Codename:       precise
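
    For what it's worth, on 12.04 the ssh job is managed by Upstart, and the stock mechanism for keeping an Upstart job from auto-starting, while leaving manual control intact, is an override file; a sketch:

        # Tell Upstart never to start ssh automatically at boot
        echo manual | sudo tee /etc/init/ssh.override

        # Manual control keeps working as usual
        sudo service ssh start
        sudo service ssh stop

        # To undo, remove the override
        sudo rm /etc/init/ssh.override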

    Read the article

  • Nautilus left pane does not expand

    - by dn.usenet
    I would prefer the left pane of Nautilus to behave like the Windows file manager's: it should have expandable/collapsible trees, and if I have /home/mydir-1 and /home/mydir-2, I should be able to see them both in the left pane. When I click on one of them, the files in that dir should show in the right pane. If Nautilus can't do it, please suggest a better file manager which does. I would rather not open 3 panes in Nautilus to do what two panes do just fine in the Windows File Manager. Secondly, how can I open two instances of Nautilus? And if it isn't possible with Nautilus, could it be done with some other file manager?

    Read the article
