Search Results

Search found 50510 results on 2021 pages for 'static files'.


  • disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used but I see only 3.6G in files/directories? Running as root on an Amazon Linux AMI (seems like CentOS); lots of free memory available, no swapping going on, no apparent file descriptor issue. The only thing I can think of is a log file that was deleted while applications append to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min, with very small decreases from time to time). Any explanation? Solution?
    du --max-depth=1 -h /
    1.2G /usr
    4.0K /cgroup
    22M /lib64
    11M /sbin
    19M /etc
    52K /dev
    2.1G /var
    4.0K /media
    0 /sys
    4.0K /selinux
    du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
    du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
    du: cannot access `/proc/14024/fd/4': No such file or directory
    du: cannot access `/proc/14024/fdinfo/4': No such file or directory
    0 /proc
    18M /home
    4.0K /logs
    8.1M /bin
    16K /lost+found
    12M /tmp
    4.0K /srv
    35M /boot
    79M /lib
    56K /root
    67M /opt
    4.0K /local
    4.0K /mnt
    3.6G /
    df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvda1 7.9G 6.5G 1.4G 84% /
    tmpfs 3.7G 0 3.7G 0% /dev/shm
    sysctl fs.file-nr
    fs.file-nr = 864 0 761182
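    The deleted-log-file hypothesis is easy to confirm. A minimal sketch (assumes lsof is installed; du only sees directory entries, so space held by deleted-but-still-open files is invisible to it but still counted by df):

    lsof +L1                      # open files with link count < 1, i.e. deleted
    lsof | grep '(deleted)'       # alternative: grep the status column
    # If a culprit shows up, truncating it through /proc frees the space
    # without restarting the process (PID and fd taken from the lsof output):
    : > /proc/<PID>/fd/<FD>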

    Read the article

  • Script is not dumping files to the expected path

    - by user3319390
    I have a script that needs to pull the matching cname from each line of my log file and then, based on the matching scode, dump the line into access and error log files under $cname/$year/$month/$day/:
    #!/bin/sh
    #base_dir="/home/vizion/Desktop"
    path="/home/vizion/Desktop/adn_DF9D_20140515_0005.log"
    name=$(basename "$path" ".log")
    for x in *.log; do
        year=${x:9:4};
        month=${x:13:2};
        day=${x:15:2};
    done
    while read -r line
    do
        cname=$(echo ${line} | awk '{split($7,c,"/"); print c[3]}')
        scode=$(echo ${line} | awk -F"[ ]" '{print $9}')
        [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
        [[ ( ${scode} -ge 200 ) && ( ${scode} -le 399 ) ]] && {
            # [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
            echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_access.log
        }
        [[ ( ${scode} -ge 400 ) && ( ${scode} -le 599 ) ]] && {
            [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day"
            echo ${line} >> ${cname}_${name}_error.log
        }
    done < $path
    I am able to filter the logs, but they are not being dumped to the exact location; they end up in other places. Please suggest a correction to the script.
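    One likely culprit, offered as a hedged sketch rather than a verified fix: the error-log branch writes to ${cname}_${name}_error.log with no directory prefix, so those lines land in the current working directory, and the access-log path mixes an absolute prefix with directories created relative to the working directory. A rewrite that anchors every path to one base directory:

    #!/bin/sh
    # Sketch of a fix: anchor all paths to $base_dir and derive the date
    # from the log file's own name (same character positions as the question).
    base_dir="/home/vizion/Desktop"
    path="$base_dir/adn_DF9D_20140515_0005.log"
    name=$(basename "$path" ".log")
    f=$(basename "$path")
    year=${f:9:4}; month=${f:13:2}; day=${f:15:2}

    while read -r line; do
        cname=$(echo "$line" | awk '{split($7,c,"/"); print c[3]}')
        scode=$(echo "$line" | awk '{print $9}')
        dir="$base_dir/$cname/$year/$month/$day"
        mkdir -p "$dir"
        if [ "$scode" -ge 200 ] && [ "$scode" -le 399 ]; then
            echo "$line" >> "$dir/${cname}_${name}_access.log"
        elif [ "$scode" -ge 400 ] && [ "$scode" -le 599 ]; then
            echo "$line" >> "$dir/${cname}_${name}_error.log"   # full path, unlike the original
        fi
    done < "$path"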

    Read the article

  • Folder with files shows as empty

    - by ProfKaos
    Last night I created a new project in Visual Studio 2010, in the folder c:\development\hobby\devices. When I navigate to that folder in a new instance of Windows Explorer, I get the 'This folder is empty.' message in the files pane. If I then, from within Visual Studio, issue the "Open containing folder" command, Explorer opens the folder c:\development\hobby\devices\LoopBack, where LoopBack is a project under the main folder. If I paste that path into another new instance of Explorer, Explorer correctly opens c:\development\hobby\devices. How is this possible? SOLVED: This is what comes of typing paths instead of selecting them. Some elementary detective work yielded the following interesting results:
    C:\>dir dev*.
    Volume in drive C is OS
    Volume Serial Number is E4B4-0563
    Directory of C:\
    2010/11/28 09:20 PM <DIR> Development
    2010/12/01 08:57 PM <DIR> Develpment
    2010/11/02 06:31 PM <DIR> DevTools
    0 File(s) 0 bytes
    3 Dir(s) 63 965 368 320 bytes free
    The disappearing project was in the badly spelt one.

    Read the article

  • Access denied when trying to open files/folders after reinstall [closed]

    - by user711532
    Possible Duplicate: Access Denied when saving a file in Windows 7. I installed Windows 7 fresh on a new machine. Now when I unarchive (WinRAR or 7z, etc.) to Program Files (x86) (for example): access denied. Even if I copy a file to a folder I installed an app to, it is still access denied. I checked the security; it looks like full control is given to the creator. This is weird, as I never ran across this before (same version of Windows 7 - it's just a fresh install after some new hardware). It is the same effect as editing the hosts file: if you do not use "run as admin", you will not be able to save it; you will have to save it somewhere else. This "file copy" issue I ran into is the same. I could change all these permissions, but this is something I never had to do before. I am the admin; why did the install not give me "full control"? How can this be globally fixed? I cannot change the permissions - they are greyed out - so that is weird as well. If I were a standard user, it would make sense; but again, I am the admin.
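    For the "globally fixed" part, a hedged sketch from an elevated command prompt (the folder path is a stand-in; note also that even an administrator account normally runs with a filtered token under UAC, which is why non-elevated writes into Program Files are denied):

    :: Take ownership of the tree, then grant the Administrators group
    :: full control recursively. Both commands ship with Windows 7.
    takeown /F "C:\Program Files (x86)\SomeApp" /R /D Y
    icacls "C:\Program Files (x86)\SomeApp" /grant Administrators:(OI)(CI)F /T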

    Read the article

  • Web Site Performance and Assembly Versioning

    - by capgpilk
    I originally wanted to write this post as one, but there is quite a large amount of information which can be broken down into different areas, so I am going to publish it in three posts:
    1. Minification and Concatenation of JavaScript and CSS Files - this post
    2. Versioning Combined Files Using Subversion - published shortly
    3. Versioning Combined Files Using Mercurial - published shortly
    Website Performance: There are many ways to improve web site performance. Two areas are reducing the amount of data that is served up from the web server and reducing the number of files that are requested. Here I will outline the process of minifying and concatenating your JavaScript and CSS files automatically at build time of your Visual Studio web site/application. To edit the project file in Visual Studio, you need to first unload it by right-clicking the project in Solution Explorer. I prefer to do this in a third-party tool such as Notepad++ and save it there, forcing VS to reload it each time I make a change, as the whole process in Visual Studio can be a bit tedious. Once you have the project file open, you will notice that it is an MSBuild project file. I am going to use a fantastic utility from Microsoft called Ajax Minifier. This tool minifies both JavaScript and CSS.
    1. Import the tasks for AjaxMin, choosing the location you installed to. I keep all third-party utilities in a Tools directory within my solution structure and source control. This way I know I can get the entire solution from source control without worrying about what other tools I need to get the project to build locally.
    <Import Project="..\Tools\MicrosoftAjaxMinifier\AjaxMin.tasks" />
    2. Now create ItemGroups for all your JS and CSS files like this, separating out your non-minified files and minified files. This can go in the AfterBuild container.
    <Target Name="AfterBuild">
      <!-- JavaScript files that need minifying -->
      <ItemGroup>
        <JSMin Include="Scripts\jqModal.js" />
        <JSMin Include="Scripts\jquery.jcarousel.js" />
        <JSMin Include="Scripts\shadowbox.js" />
      </ItemGroup>
      <!-- CSS files that need minifying -->
      <ItemGroup>
        <CSSMin Include="Content\Site.css" />
        <CSSMin Include="Content\themes\base\jquery-ui.css" />
        <CSSMin Include="Content\shadowbox.css" />
      </ItemGroup>
      <!-- JavaScript files to combine -->
      <ItemGroup>
        <JSCat Include="Scripts\jqModal.min.js" />
        <JSCat Include="Scripts\jquery.jcarousel.min.js" />
        <JSCat Include="Scripts\shadowbox.min.js" />
      </ItemGroup>
      <!-- CSS files to combine -->
      <ItemGroup>
        <CSSCat Include="Content\Site.min.css" />
        <CSSCat Include="Content\themes\base\jquery-ui.min.css" />
        <CSSCat Include="Content\shadowbox.min.css" />
      </ItemGroup>
    3. Call AjaxMin to do the crunching.
      <Message Text="Minimizing JS and CSS Files..." Importance="High" />
      <AjaxMin JsSourceFiles="@(JSMin)" JsSourceExtensionPattern="\.js$"
               JsTargetExtension=".min.js" JsEvalTreatment="MakeImmediateSafe"
               CssSourceFiles="@(CSSMin)" CssSourceExtensionPattern="\.css$"
               CssTargetExtension=".min.css" />
    This will create the *.min.css and *.min.js files in the same directory as the original files.
    4. Now concatenate the minified files into one file for JavaScript and another for CSS. Here we write out the files with a default file name. In later posts I will cover versioning these files, the same as your project assembly, again to help performance.
      <Message Text="Concat JS Files..." Importance="High" />
      <ReadLinesFromFile File="%(JSCat.Identity)">
        <Output TaskParameter="Lines" ItemName="JSLinesSite" />
      </ReadLinesFromFile>
      <WriteLinesToFile File="Scripts\site-script.combined.min.js" Lines="@(JSLinesSite)"
                        Overwrite="true" />
      <Message Text="Concat CSS Files..." Importance="High" />
      <ReadLinesFromFile File="%(CSSCat.Identity)">
        <Output TaskParameter="Lines" ItemName="CSSLinesSite" />
      </ReadLinesFromFile>
      <WriteLinesToFile File="Content\site-style.combined.min.css" Lines="@(CSSLinesSite)"
                        Overwrite="true" />
    5. Save the project file. If you have Visual Studio open, it will ask you to reload the project. You can now run a build and these minified and combined files will be created automatically.
    6. Finally, reference these minified combined files in your web page. In the next two posts I will cover versioning these files to match your assembly.

    Read the article

  • Munin not creating HTML files in Ubuntu Server 14.04

    - by lepe
    I have used Munin on several servers, and this is the first time it has taken me so long to set up. When I telnet to munin-node directly I can list the services, there are no errors in the logs, and munin is updating every 5 minutes. However, no HTML files are created. I'm using the default location (/var/cache/munin/www) and I can confirm the permissions of that directory are set to munin.munin (IP and domain have been changed).
    munin.conf:
    dbdir /var/lib/munin
    htmldir /var/cache/munin/www
    logdir /var/log/munin
    rundir /var/run/munin
    [example.com;]
    address 100.100.50.200
    munin-node.conf:
    log_level 4
    log_file /var/log/munin/munin-node.log
    pid_file /var/run/munin/munin-node.pid
    background 1
    setsid 1
    user root
    group root
    host_name example.com
    allow ^127\.0\.0\.1$
    allow ^100\.100\.50\.200$
    allow ^::1$
    /etc/hosts:
    100.100.50.200 example.com
    127.0.0.1 localhost
    $ telnet example.com 4949
    Trying 100.100.50.200...
    Connected to example.com.
    Escape character is '^]'.
    # munin node at example.com
    list
    apache_accesses apache_processes apache_volume cpu cpuspeed df df_inode entropy fail2ban forks fw_packets if_err_eth0 if_err_eth1 if_eth0 if_eth1 interrupts ipmi_fans ipmi_power ipmi_temp irqstats load memory munin_stats mysql_bin_relay_log mysql_commands mysql_connections mysql_files_tables mysql_innodb_bpool mysql_innodb_bpool_act mysql_innodb_insert_buf mysql_innodb_io mysql_innodb_io_pend mysql_innodb_log mysql_innodb_rows mysql_innodb_semaphores mysql_innodb_tnx mysql_myisam_indexes mysql_network_traffic mysql_qcache mysql_qcache_mem mysql_replication mysql_select_types mysql_slow mysql_sorts mysql_table_locks mysql_tmp_tables ntp_2001:e40:100:208::123 ntp_91.189.94.4 ntp_kernel_err ntp_kernel_pll_freq ntp_kernel_pll_off ntp_offset ntp_states open_files open_inodes postfix_mailqueue postfix_mailvolume proc_pri processes swap threads uptime users vmstat
    fetch df
    _dev_sda3.value 2.1762874086869
    _sys_fs_cgroup.value 0
    _run.value 0.0503536980635825
    _run_lock.value 0
    _run_shm.value 0
    _run_user.value 0
    _dev_sda5.value 0.0176986285727571
    _dev_sda8.value 1.08464646179852
    _dev_sda7.value 0.0346633563514803
    _dev_sda9.value 6.81031810822797
    _dev_sda6.value 9.0932802215469
    .
    /var/log/munin/munin-node.log:
    Process Backgrounded
    2014/08/16-14:13:36 Munin::Node::Server (type Net::Server::Fork) starting! pid(19610)
    Binding to TCP port 4949 on host 100.100.50.200 with IPv4
    2014/08/16-14:23:11 CONNECT TCP Peer: "[100.100.50.200]:55949" Local: "[100.100.50.200]:4949"
    2014/08/16-14:36:16 CONNECT TCP Peer: "[100.100.50.200]:56209" Local: "[100.100.50.200]:4949"
    /var/log/munin/munin-update.log:
    ...
    2014/08/16 14:30:01 [INFO]: Starting munin-update
    2014/08/16 14:30:01 [INFO]: Munin-update finished (0.00 sec)
    2014/08/16 14:35:02 [INFO]: Starting munin-update
    2014/08/16 14:35:02 [INFO]: Munin-update finished (0.00 sec)
    2014/08/16 14:40:01 [INFO]: Starting munin-update
    2014/08/16 14:40:01 [INFO]: Munin-update finished (0.00 sec)
    $ ls -la /var/cache/munin/www/
    drwxr-xr-x 3 munin munin 19 Aug 16 13:55 .
    drwxr-xr-x 3 root root 16 Aug 16 13:54 ..
    drwxr-xr-x 2 munin munin 4096 Aug 16 13:55 static
    Any ideas on why it is not working?
    EDIT: This is what /var/log/munin/ looks like after some days:
    -rw-r----- 1 www-data 0 Aug 16 13:54 munin-cgi-graph.log
    -rw-r----- 1 www-data 0 Aug 16 13:54 munin-cgi-html.log
    -rw-rw-r-- 1 munin 0 Aug 16 13:55 munin-html.log
    -rw-r----- 1 munin 0 Aug 19 06:18 munin-limits.log
    -rw-r----- 1 munin 15K Aug 18 14:10 munin-limits.log.1
    -rw-r----- 1 munin 1.8K Aug 18 06:15 munin-limits.log.2.gz
    -rw-rw-r-- 1 munin 1.3K Aug 17 06:15 munin-limits.log.3.gz
    -rw-r--r-- 1 root 6.5K Aug 16 13:55 munin-node-configure.log
    -rw-r--r-- 1 root 0 Aug 17 06:18 munin-node.log
    -rw-r--r-- 1 root 420 Aug 16 14:52 munin-node.log.1.gz
    -rw-r----- 1 munin 0 Aug 19 06:18 munin-update.log
    -rw-r----- 1 munin 11K Aug 18 14:10 munin-update.log.1
    -rw-r----- 1 munin 1.6K Aug 18 06:15 munin-update.log.2.gz
    -rw-rw-r-- 1 munin 1.5K Aug 17 06:15 munin-update.log.3.gz
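    A hedged hunch worth checking (based on Munin 2.x defaults, not verified against this box): Ubuntu 14.04 ships Munin 2.0, which can generate HTML and graphs on demand through CGI instead of writing them from cron - and the empty munin-cgi-html.log above suggests CGI mode is configured but never invoked. Forcing cron-based static output is a two-line munin.conf change:

    # /etc/munin/munin.conf - have munin-cron write static HTML and graphs
    # into htmldir instead of relying on the CGI scripts
    html_strategy cron
    graph_strategy cron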

    Read the article

  • Can JNI handle any DLL files (Windows)?

    - by henry
    I am new to JNI and have a few questions: Can JNI handle every type of DLL that exists in Windows? I wanted to link to a library but it gives me an error. Is it possible that JNI and the DLL are not compatible?
    Excerpt from VB.NET (it works):
    Private Declare Function ConnectReader Lib "rfidhid.dll" () As Integer
    Private Declare Function DisconnectReader Lib "rfidhid.dll" () As Integer
    Private Declare Function SetAntenna Lib "rfidhid.dll" (ByVal mode As Integer) As Integer
    Full code from Java:
    public class MainForm {
        /**
         * @param args
         */
        public native int ConnectReader();

        public static void main(String[] args) {
            // TODO Auto-generated method stub
            MainForm mf = new MainForm();
            System.out.println(mf.ConnectReader());
        }

        static {
            System.loadLibrary("rfidhid");
        }
    }
    Error shown:
    Exception in thread "main" java.lang.UnsatisfiedLinkError: MainForm.ConnectReader()I
        at MainForm.ConnectReader(Native Method)
        at MainForm.main(MainForm.java:13)
    Can anyone point out to me where I might have gone wrong?
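    A hedged pointer at the likely cause: System.loadLibrary can load any DLL, but a native method only resolves if the DLL exports a function following the JNI naming convention (here, Java_MainForm_ConnectReader taking JNIEnv* and jobject parameters). rfidhid.dll exports plain C functions, which is why VB.NET's Declare works but JNI throws UnsatisfiedLinkError. One common workaround is JNA, a third-party library that binds plain exports directly; a sketch (function names taken from the VB declarations in the question, untested against this reader):

    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public interface RfidHid extends Library {
        // Maps straight onto the C exports of rfidhid.dll
        RfidHid INSTANCE = (RfidHid) Native.loadLibrary("rfidhid", RfidHid.class);

        int ConnectReader();
        int DisconnectReader();
        int SetAntenna(int mode);
    }

    // usage: int rc = RfidHid.INSTANCE.ConnectReader();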

    Read the article

  • How to get the latest files in C#

    - by Newbie
    I have some files with the same name but different dates. Basically we are finding the files with the most recent dates. The file pattern is <FileNames><YYYYMMDD><FileExtension>, e.g. test_20100506.xls indicates <FileNames> = test_, <YYYYMMDD> = 20100506, <FileExtension> = .xls. Now in the source folder the files are standardization_20100503.xls, standardization_20100504.xls, standardization_20100305.xls, replacement_20100505.xls. As can be seen, there are 3 standardization_*.xls files but only 1 replacement_*.xls. The output should be a list of file names containing standardization_20100504.xls and replacement_20100505.xls, because each is the most recent of its kind. I have tried my own logic but somehow failed. My idea is as under:
    private static void GetLatestFiles(ref List<FileInfo> validFiles)
    {
        List<FileInfo> validFilesTemp = new List<FileInfo>();
        for (int i = 0; i < validFiles.Count; i++)
        {
            for (int j = i+1; j < validFiles.Count; j++)
            {
                string getFileTextX = ExtractText(validFiles[i].Name);
                string getFileTextY = ExtractText(validFiles[j].Name);
                if (getFileTextX == getFileTextY)
                {
                    int getFileDatesX = Convert.ToInt32(ExtractNumbers(validFiles[i].Name));
                    int getFileDatesY = Convert.ToInt32(ExtractNumbers(validFiles[j].Name));
                    if (getFileDatesX > getFileDatesY)
                    {
                        validFilesTemp.Add(validFiles[i]);
                    }
                    else
                    {
                        validFilesTemp.Add(validFiles[j]);
                    }
                }
            }
        }
        validFiles.Clear();
        validFiles = validFilesTemp;
    }
    The ExtractNumbers is:
    public static string ExtractNumbers(string expr)
    {
        return string.Join(null, System.Text.RegularExpressions.Regex.Split(expr, "[^\\d]"));
    }
    and the ExtractText is:
    public static string ExtractText(string expr)
    {
        return string.Join(null, System.Text.RegularExpressions.Regex.Split(expr, "[\\d]"));
    }
    I am using C# 3.0 and framework 3.5. Help needed, it's very urgent. Thanks.
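    Since C# 3.0 / .NET 3.5 is available, LINQ can replace the nested loops entirely. A hedged sketch reusing the ExtractText/ExtractNumbers helpers from the question (needs using System.Linq):

    // Group files by their non-numeric part ("standardization_" + ".xls", ...)
    // and keep the one with the largest YYYYMMDD value per group.
    private static List<FileInfo> GetLatestFiles(List<FileInfo> validFiles)
    {
        return validFiles
            .GroupBy(f => ExtractText(f.Name))
            .Select(g => g.OrderByDescending(f => Convert.ToInt32(ExtractNumbers(f.Name)))
                          .First())
            .ToList();
    }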

    Read the article

  • How to associate document files with MS Office 2010 Beta?

    - by Semyon Perepelitsa
    I've installed MS Office 2010 Beta (OneClick technology). All apps launch from one program; Word, for example, has this link:
    "C:\Program Files (x86)\Common Files\microsoft shared\Virtualization Handler\CVH.EXE" "Microsoft Word 2010 (Beta) 2014006204190000"
    Or OneNote:
    "C:\Program Files (x86)\Common Files\microsoft shared\Virtualization Handler\CVH.EXE" "Microsoft OneNote 2010 (Beta) 2014006204190000"
    Because of that I can't associate files with the Office programs in file properties; they actually get associated with "Microsoft Office Client Virtualization Handler" (CVH.EXE). Does anyone know another way to do that?

    Read the article

  • Creating a USB stick for installing CentOS 6.x using DVD1 and DVD2 ISO files

    - by user250563
    First, we create 2 partitions on the USB stick, which is, let's say, 16GB: the first partition is only 1GB and the second partition is the rest of what is available. After we "w" (write) the changes, the USB now has 2 partitions, one of 1GB and one of more than 14GB, so we have sdb1 and sdb2. Now we need to turn these partitions into filesystems. Some say I should run these commands:
    mkfs.vfat -F 32 /dev/sdb1
    mkfs.ext3 /dev/sdb2
    but some web pages recommend using:
    mkfs.vfat -n BOOT /dev/sdb1
    mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2
    Which is it? Let's say the DVDs are called CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso. So we make a directory:
    mkdir -p /mnt/dvd1
    and then mount it:
    mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1
    And I suppose we don't make a directory for dvd2 and we don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB bootable by finding the file named mbr.bin and then writing it there via these commands:
    dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
    parted /dev/sdb set 1 boot on
    In other words, we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good? Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. At this point, should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, and rename it to syslinux? And then inside this syslinux folder there will be a file called isolinux.cfg, which should be renamed to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto this USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing, or do they go into any folders? CentOS's own web site has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working but things got ruined; I have to do it again and this time take notes.
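    For what it's worth, a hedged sketch of the remaining steps in the spirit of the CentOS wiki recipe - mount points and the layout choices are assumptions, so treat it as a starting point rather than a verified procedure:

    # Assumes /dev/sdb1 is the FAT32 boot partition and syslinux is installed
    mkdir -p /mnt/usb1 /mnt/usb2
    mount /dev/sdb1 /mnt/usb1
    mount /dev/sdb2 /mnt/usb2

    # Bootloader: the DVD's isolinux files become syslinux files on the stick
    mkdir /mnt/usb1/syslinux
    cp /mnt/dvd1/isolinux/* /mnt/usb1/syslinux/
    mv /mnt/usb1/syslinux/isolinux.cfg /mnt/usb1/syslinux/syslinux.cfg
    syslinux -d syslinux /dev/sdb1      # install the boot code into the partition

    # Installer images plus the ISOs themselves go onto the data partition;
    # at the boot prompt the installer is then pointed at the ISO location
    # (e.g. "repo=hd:sdb2:/").
    cp -r /mnt/dvd1/images /mnt/usb2/
    cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usb2/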

    Read the article

  • How do I make IE8 NOT delete temporary internet files?

    - by Josh
    Every time I close Internet Explorer, all temporary files (including cookies) are deleted. IE has a setting for this (Tools > Internet Options > Advanced > Security > "Empty Temporary Internet Files folder when browser is closed") but the setting is turned off. I tried cycling it on and off again, with no luck. I can open the Temporary Internet Files folder and watch all the files vanish each time IE closes. How can I get the temporary files to stay where they belong?

    Read the article

  • Why do my files fail checksums when transferring to an eSATA drive but not when transferred off the external?

    - by R. Peterson
    I have been using TeraCopy to verify transfers, and the large movie files show artifacting after transfer and fail their checksums; small files do not fail, only large files, it seems. When I transfer files off the external hard drive, no such checksum failures occur on any of the files. Could this be a bad cable, or maybe a bad external interface or eSATA interface on my computer? I have tried two eSATA interfaces, one on a PCI card and the other on the motherboard, both with similar results. Or could a bad hard drive or external enclosure be the problem?

    Read the article

  • HTTP Content-type header for cached files

    - by Brian
    Hello. Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-Type is only set correctly the first time I load it - subsequent refreshes are missing Content-Type altogether, and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, e.g. http://www.site.com/script.js?12345. However, I don't want to have to do that, since caching is good and all I want is for the Content-Type to be present. I've tried using a RewriteRule to force the type but that still didn't solve the problem. Any ideas? Thanks, Brian
    More details - HTTP headers WITHOUT the random query string value (http://localhost/script.js):
    GET /script.js HTTP/1.1
    Host: localhost
    User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
    Accept: */*
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 115
    Connection: keep-alive
    Referer: http://localhost/
    Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;
    If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT
    If-None-Match: "3440e9-119ed-485621404f100"
    Cache-Control: max-age=0

    HTTP/1.1 304 Not Modified
    Date: Thu, 29 Apr 2010 20:19:44 GMT
    Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
    Connection: Keep-Alive
    Keep-Alive: timeout=5, max=100
    Etag: "3440e9-119ed-485621404f100"
    Vary: Accept-Encoding
    X-Pad: avoid browser bug

    HTTP headers WITH the random query string value (http://localhost/script.js?c947344de8278053f6edbb4365550b25):
    GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1
    Host: localhost
    User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
    Accept: */*
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 115
    Connection: keep-alive
    Referer: http://localhost/
    Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;

    HTTP/1.1 200 OK
    Date: Thu, 29 Apr 2010 20:14:40 GMT
    Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
    Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT
    Etag: "3440e9-119ed-485621404f100"
    Accept-Ranges: bytes
    Vary: Accept-Encoding
    Content-Encoding: gzip
    Content-Length: 24605
    Keep-Alive: timeout=5, max=100
    Connection: Keep-Alive
    Content-Type: application/javascript
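    A hedged reading of those traces: the request without the query string returned 304 Not Modified, and a 304 carries no body, so omitting Content-Type (and skipping gzip) there is correct HTTP behaviour - the browser serves its cached, already-compressed copy. If the aim is simply compressed, cacheable responses for scripts and styles, the usual mod_deflate configuration looks like this (a sketch assuming mod_deflate and mod_headers are loaded):

    # Compress text resources on the fly and tell caches that the response
    # varies by Accept-Encoding, so gzip and identity copies are kept apart.
    AddOutputFilterByType DEFLATE text/css application/javascript application/x-javascript
    Header append Vary Accept-Encoding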

    Read the article

  • Unable to locate Windows Server error log files

    - by Sam007
    I am getting a 500 Internal Server Error in my application. Firebug gives me:
    NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600
    And when I go to that particular link, I get this error: "Server Error - Object reference not set to an instance of an object." Any idea how I can correct it?
    UPDATE: I was told to use Fiddler to see the error details occurring over the network, and this is the output I got:
    SESSION STATE: Done.
    Response Entity Size: 849 bytes.
    == FLAGS ==================
    BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
    X-CLIENTPORT: 2010
    X-RESPONSEBODYTRANSFERLENGTH: 849
    X-EGRESSPORT: 2023
    X-HOSTIP: 128.196.53.161
    X-PROCESSINFO: firefox:2248
    X-CLIENTIP: 127.0.0.1
    X-SERVERSOCKET: REUSE ServerPipe#2
    == TIMING INFO ============
    ClientConnected: 15:53:51.383
    ClientBeginRequest: 15:53:51.494
    GotRequestHeaders: 15:53:51.494
    ClientDoneRequest: 15:53:51.494
    Determine Gateway: 0ms
    DNS Lookup: 0ms
    TCP/IP Connect: 0ms
    HTTPS Handshake: 0ms
    ServerConnected: 15:52:45.077
    FiddlerBeginRequest: 15:53:51.495
    ServerGotRequest: 15:53:51.495
    ServerBeginResponse: 15:53:51.679
    GotResponseHeaders: 15:53:51.679
    ServerDoneResponse: 15:53:51.679
    ClientBeginResponse: 15:53:51.679
    ClientDoneResponse: 15:53:51.679
    Overall Elapsed: 00:00:00.1850106
    The response was buffered before delivery to the client.
    == WININET CACHE INFO ============
    This URL is not present in the WinINET cache. [Code: 2]
    * Note: Data above shows WinINET's current cache state, not the state at the time of the request.
    * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only.
    But I am still confused as to what the error is. This is the application. I am not sure if the error is due to the ArcGIS Server or to Windows Server 2008. I am new to working with Windows Server and wanted to know where I can look for the error log files. This is the link which gives the details and the log info of the job executed. This is the output.
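    A hedged pointer for the "where are the logs" part (default locations only; the ArcGIS detail in particular is an assumption that varies by version):

    :: IIS request logs on Windows Server 2008 - one line per request,
    :: including the HTTP status code and substatus:
    dir %SystemDrive%\inetpub\logs\LogFiles
    :: Application-level errors land in the Windows Event Viewer:
    eventvwr
    :: ArcGIS Server also keeps its own server logs, viewable through
    :: ArcGIS Server Manager or on disk (path depends on the install).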

    Read the article

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users can access the files using a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility. Users can access previous versions of a file (from the backups) using Windows Explorer's "Previous Versions" feature. This setup is currently working well, except for the following:
    1. We would prefer to have access to hourly versions of each file, not daily.
    2. The previous-versions mechanism is tied to the backup mechanism. Windows 7 performs a full backup every week and an incremental backup every day. The previous versions of a file are whatever is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes in the documents. That's a pretty inefficient use of space.
    3. Looking up previous versions of a file is very slow (tens of minutes). This is probably related to the previous issue - Windows has to traverse all of its backups.
    I am considering using SVN plus TortoiseSVN's autocommit/autoupdate. It would have the following advantages:
    1. Backups are easy and also cover the whole history of each document (just back up the repository).
    2. Previous versions can be created frequently; I think the svn commit/update cycle can be run every two minutes or so.
    3. Users can sync over the net.
    However, I can see the following issues:
    1. More conflicts than the original setup, because multiple users can now edit the same file even while both are online, i.e. can connect to the SVN repo. The users can of course lock a file before editing, but that would mean they have to adjust.
    2. Delay in propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantaneously available to other online users. With the SVN setup, changes will only propagate when the users execute the svn add/commit/update sequence; the delay will probably be a few minutes. This workflow will no longer work: "Hi, I just edited document X, can you have a quick look?"
    I would like to ask the opinion of the community for alternative setups, or improvements on the above setup to work out the kinks.
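    For reference, the autocommit side can live entirely in a scheduler. A hedged one-liner (crontab syntax shown for brevity even though the clients here are Windows, where Task Scheduler would run the equivalent batch file; the path and the 2-minute interval are assumptions):

    # Every 2 minutes: pull others' changes, register new files, push ours
    */2 * * * *  cd ~/docs && svn update -q && svn add --force . -q && svn commit -m "autocommit" -q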

    Read the article

  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:
    location /secure/ {
        internal;
        alias /home/ldr/webapps/nginx/app/secure/;
    }
    I'm passing in paths in the form "/myfile.doc", and the file's path would be /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link). I've tried taking the trailing / off the location directive and that makes no difference. Two questions:
    1. How do I fix it?
    2. How can I debug problems like this myself? How can I get nginx to report which path it's looking for? error.log shows nothing and access.log just tells me which URL is being requested - that's the bit I already know! It's no fun trying things randomly without any feedback.
    Here's my entire nginx.conf:
    daemon off;
    worker_processes 2;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;

        server {
            listen 21534;
            server_name my.server.com;
            client_max_body_size 5m;

            location /media/ {
                alias /home/ldr/webapps/nginx/app/media/;
            }

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                fastcgi_pass_header Authorization;
                fastcgi_hide_header X-Accel-Redirect;
                fastcgi_hide_header X-Sendfile;
                fastcgi_intercept_errors off;
                include fastcgi_params;
            }

            location /secure {
                internal;
                alias /home/ldr/webapps/nginx/app/secure/;
            }
        }
    }
    EDIT: I'm trying some of the suggestions here. So I've tried:
    location /secure/ {
        internal;
        alias /home/ldr/webapps/nginx/app/;
    }
    both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location', giving:
    location ^~ /secure/ { ...etc... }
    Not sure what that signifies, but it didn't work either!
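    One hedged observation on the 404s: nginx matches the X-Accel-Redirect value against its locations, so the backend has to emit the full internal URI including the /secure/ prefix - sending just "/myfile.doc" would fall through to "location /" and back to the FastCGI backend. With the original block, the pairing would look like this (paths as in the question; a sketch, not a verified fix):

    # nginx: /secure/myfile.doc -> /home/ldr/webapps/nginx/app/secure/myfile.doc
    location /secure/ {
        internal;
        alias /home/ldr/webapps/nginx/app/secure/;
    }
    # backend response header (e.g. set by the Django app):
    #   X-Accel-Redirect: /secure/myfile.doc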

    Read the article

  • How to check there are no HTML files in the current directory?

    - by kev
    I have a script which downloads HTML files into the current directory, then generates a report based on those HTML files, and finally deletes them all. So, when I run this script, I want to make sure there are no HTML files in the current directory to start with. This is what I have:
    if ls *.html >/dev/null 2>&1; then
        echo 'clear HTML files first'
        exit
    fi
    Is there an easy way to check?
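    Two hedged alternatives, assuming bash rather than plain sh:

    # 1. compgen expands the glob itself and returns non-zero on no match,
    #    avoiding the ls subprocess:
    if compgen -G '*.html' > /dev/null; then
        echo 'clear HTML files first'
        exit 1
    fi

    # 2. nullglob makes an unmatched *.html expand to nothing, so the
    #    array length is a direct file count:
    shopt -s nullglob
    files=(*.html)
    if (( ${#files[@]} > 0 )); then
        echo 'clear HTML files first'
        exit 1
    fi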

    Read the article

  • SCons does not clean all files

    - by meowsqueak
    I have a file system containing directories of "builds", each of which contains a file called "build-info.xml". However, some of the builds happened before the build script generated "build-info.xml", so in that case I have a somewhat non-trivial SCons SConstruct that is used to generate a skeleton build-info.xml so that it can be used as a dependency for further rules. I.e., for each directory:
    - if build-info.xml already exists, do nothing. More importantly, do not remove it on a 'scons --clean'.
    - if build-info.xml does not exist, generate a skeleton one instead - build-info.xml has no dependencies on any other files - the skeleton is essentially minimal defaults.
    - during a --clean, remove build-info.xml if it was generated, otherwise leave it be.
    My SConstruct looks something like this:
    def generate_actions_BuildInfoXML(source, target, env, for_signature):
        cmd = "python '%s/bin/create-build-info-xml.py' --version $VERSION --path . --output ${TARGET.file}" % (Dir('#').abspath,)
        return cmd

    bld = Builder(generator = generate_actions_BuildInfoXML, chdir = 1)
    env.Append(BUILDERS = { "BuildInfoXML" : bld })

    ...

    # VERSION = some arbitrary string, not important here
    # path = filesystem path, set elsewhere
    build_info_xml = "%s/build-info.xml" % (path,)
    if not os.path.exists(build_info_xml):
        env.BuildInfoXML(build_info_xml, None, VERSION = build)
    My problem is that 'scons --clean' does not remove the generated build-info.xml files. I played around with env.Clean(t, build_info_xml) within the 'if', but I was unable to get this to work - mainly because I could not work out what to assign to 't'. I want a generated build-info.xml to be cleaned unconditionally, rather than based on the cleaning of another target, and I wasn't able to get this to work. If I tried a simple env.Clean(None, "build_info_xml") after but outside the 'if', I found that SCons would clean every single build-info.xml file, including those that weren't generated. Not good either. What I'd like to know is how SCons goes about determining which files should be cleaned and which should not. Is there something funny about the way I've used a generator function that prevents SCons from recording this target as a Clean candidate?
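    A hedged answer to the "how does SCons decide" question: --clean removes targets of builders that are declared during that run. Because the env.BuildInfoXML() call is guarded by os.path.exists(), on the clean pass (when the generated file does exist) the builder is never declared, so SCons has no record that the file is a target. One sketch of a workaround - track generation with a marker file so the declaration can be unconditional for generated files (the marker name is hypothetical, and the generator script would also need to touch it, which is omitted here):

    import os

    build_info_xml = "%s/build-info.xml" % (path,)
    marker = build_info_xml + ".generated"   # hypothetical "we made this" flag

    if os.path.exists(build_info_xml) and not os.path.exists(marker):
        # Hand-made file: declare nothing, so neither build nor clean touches it.
        pass
    else:
        t = env.BuildInfoXML(build_info_xml, None, VERSION = build)
        env.Clean(t, marker)                 # remove the marker alongside the target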

    Read the article

  • WiX 3: Using heat.exe to add bulk files to a new WiX project: HEAT5150

    - by Karen Kwong
    If this is a repeat question, please direct me to the existing solution; I wasn't able to find a matching query. We currently use InstallShield, and I'm attempting to convert a project with 407 files to a WiX 3 installation package. I tried using heat.exe to do some of the automation:
    c:\> heat dir "c:\projectDir\projectA" -gg -ke -template:Product -out "c:\install\projectA\heatOutput"
    but I get the following warning for almost every file:
    heat.exe: warning HEAT5150 : Could not harvest data from a file that was expected to be a SelfReg DLL: c:\projectDir\projectA\plugin1.dll. If this file does not support SelfReg you can ignore this warning. Otherwise, this error detail may be helpful to diagnose the failure: Unable to load file: c:\projectDir\projectA\plugin1.dll, error: 126.
    Q: Is it normal for this warning to be reported for every file? If there's a current "How To create/convert to your first WiX install project with many files" tutorial, please point me to it. The key requirement is "with many files".
    Thank you,
    -Karen Kwong-
    PS. I know that WiX is designed for incremental install project creation, but it would be nice to know if there's an automated way to convert existing install projects.
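    For what it's worth, a hedged note: HEAT5150 just means heat tried to load each DLL to harvest self-registration data and couldn't (error 126 is "module not found", often a missing dependency of the DLL). If none of the files are SelfReg DLLs, the harvesting - or the warning itself - can be switched off:

    :: -sreg suppresses registry harvesting; -sw5150 suppresses that
    :: specific warning ID (flags per WiX 3's heat.exe; verify with heat -?)
    heat dir "c:\projectDir\projectA" -gg -ke -sreg -sw5150 -template:Product -out "c:\install\projectA\heatOutput"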

    Read the article

  • Using .NET's HttpWebRequest to download a multitude of files in a row

    - by Cornelius
    I have an application that needs to download several files in a row in succession (sometimes a few thousand). However, when several files need to be downloaded, I end up getting an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging, and basically it's because the machine has run out of sockets (they all sit waiting for 240s or so before they become available again) - not coincidentally, it starts happening around the 1024-file range. I would expect the HttpWebRequest/ServicePointManager to be reusing my connection, but apparently it is not (and the files are HTTPS, so that may be part of it). I never saw this problem in the C++ code this was ported from (but that doesn't mean it never happened - I'd be surprised if it did, though). I am properly closing the WebRequest object, and the HttpWebRequest object has KeepAlive set to true by default. Next, my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem. Has anyone else run into it, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do. Here's some basic code to verify what I'm doing (just in case I'm missing closing something):
    WebRequest webRequest = WebRequest.Create(uri);
    webRequest.Method = "GET";
    webRequest.Credentials = new NetworkCredential(username, password);
    WebResponse webResponse = webRequest.GetResponse();
    try
    {
        using (Stream stream = webResponse.GetResponseStream())
        {
            // read the stream
        }
    }
    finally
    {
        webResponse.Close();
    }
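    A hedged sketch of settings commonly tried for this symptom (the limit values are arbitrary; none of this is verified against this particular workload). The idea is to keep a small pool of keep-alive connections per host instead of opening a fresh socket - and a fresh ephemeral port - for every download:

    // Before issuing any requests:
    ServicePointManager.DefaultConnectionLimit = 8;       // concurrent sockets per host
    ServicePointManager.MaxServicePointIdleTime = 90000;  // ms before an idle pooled socket closes

    // Per request - fully draining the response stream matters too, since
    // a connection is only returned to the pool once the body is consumed:
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.KeepAlive = true;   // the default, stated here for emphasis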

    Read the article

  • Writing files in App_Data causes TempData to be null

    - by RAMX
    I have a small ASP.NET MVC 1 web app that can store files and create directories in the App_Data directory. When the write operation succeeds, I add a message to TempData and do a RedirectToRoute. The problem is that TempData is null when the action executes. If I write the files to a directory outside of the web application's root directory, TempData is not null and everything works correctly. Any ideas why writing in App_Data seems to clear TempData?
    Edit: if DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment) writes to App_Data, TempData will be null in the action being redirected to; if it writes to a directory outside the web app root, it is fine. No exceptions are being thrown.
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Create(int id, string path, FormCollection form)
    {
        ViewData["path"] = path;
        ViewData["id"] = id;
        HttpPostedFileBase hpf;
        string comment = form["FileComment"];
        hpf = Request.Files["File"] as HttpPostedFileBase;
        if (hpf.ContentLength != 0)
        {
            DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment);
            TempData["notification"] = "file was created";
            return RedirectToRoute(new {
                controller = "File",
                action = "ViewDetails",
                id = id,
                path = path + Path.GetFileName(hpf.FileName)
            });
        }
        else
        {
            TempData["notification"] = "No file was selected.";
            return View();
        }
    }
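    A hedged explanation consistent with the symptom: ASP.NET watches folders under the application root for changes, and enough churn there (directory creation in particular) can recycle the application domain between the write and the redirected request; MVC 1's default TempData provider stores data in in-process session state, which an app-domain recycle wipes. Writing outside the root avoids the file-change notification, which would explain why that variant works. A sketch of one mitigation - moving session state out of the worker process:

    <!-- web.config sketch: session (and therefore TempData) survives an
         app-domain recycle when held by the ASP.NET State Service. -->
    <system.web>
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=127.0.0.1:42424" />
    </system.web>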

    Read the article

  • Cron Job on Ubuntu Hardy Executing But Not Deleting Files As Expected

    - by Patrick McKenzie
    I have a bit of a pickle here and wonder if anyone can give me some pointers. I have a cron job which executes for a particular user daily and is supposed to sweep files in a particular directory. Technically, it is two jobs. I've turned on cron.log to verify they're actually executing, and they are:
    May 24 11:03:01 AppNameGoesHere /USR/SBIN/CRON[11257]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l})
    May 24 11:04:01 AppNameGoesHere /USR/SBIN/CRON[11260]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {stats,popular,bcf,articles,expenses})
    I have removed the actual usernames and formatted it so that it is less ugly on StackOverflow. Now, my question: despite the fact that I can see these deletions executing and apparently succeeding in the log, if I go to the specified directory, the files are still there. I initially suspected permission hijinks were going on, but I've verified that I can delete the files manually by su-ing into the mongrel_AppNameGoesHere user and issuing individual rm commands, or by copy/pasting the cron job to the command line. Anything that I don't manually zap stays unzapped despite days of that cron job executing successfully. Any suggestions as to what might be happening? I was previously using Dapper Drake with these cron jobs in the /etc/crontab file directly, and when I upgraded to Hardy I moved them to user-specific crontabs (via sudo crontab -e -u mongrel_AppNameGoesHere), which was the point where they appear to have stopped working.

    Read the article

  • How to exclude R*.class files from a ProGuard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs. one project for Scala and one for Android). I've come across a problem. I am using this input file (arguments to) ProGuard:
    -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
    -outjar lib/scandroid.jar
    -libraryjars lib/android.jar
    -dontwarn
    -dontoptimize
    -dontobfuscate
    -dontskipnonpubliclibraryclasses
    -dontskipnonpubliclibraryclassmembers
    -keepattributes Exceptions,InnerClasses,Signature,Deprecated,
        SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
    -keep public class org.scala.jeb.** { public protected *; }
    -keep public class org.xml.sax.EntityResolver { public protected *; }
    ProGuard successfully builds scandroid.jar; however, it appears to have included the generated R classes that the Android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The Android Dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also in the R*.class files). How can I modify the above ProGuard arguments to exclude the R*.class files from scandroid.jar so the Dalvik converter is happy?
    Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to make it complain about duplicate classes, and in addition ProGuard decided to exclude my Scala class files too.
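    A hedged sketch of one approach: ProGuard accepts class-file filters in parentheses on -injars, so the generated R classes (including inner classes like R$id, which compile to R$*.class) can be filtered out of the input rather than the output:

    # Filter the R classes out of the bin input; everything else is unchanged.
    -injars bin(!org/jeb/R.class,!org/jeb/R$*.class);lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)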

    Read the article
