Search Results

Search found 22993 results on 920 pages for 'global load balancing'.


  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the imap functions, but now php-fpm fails to start with the following error:
    Starting php_fpm Error in argument 1, char 1: no argument for option -
    Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>] | {<file> [args...]}
      -a               Run interactively
      -C               Do not chdir to the script's directory
      -c <path>|<file> Look for php.ini file in this directory
      -n               No php.ini file will be used
      -d foo[=bar]     Define INI entry foo with value 'bar'
      -e               Generate extended information for debugger/profiler
      -f <file>        Parse <file>. Implies `-q'
      -h               This help
      -i               PHP information
      -l               Syntax check only (lint)
      -m               Show compiled in modules
      -q               Quiet-mode. Suppress HTTP Header output.
      -s               Display colour syntax highlighted source.
      -v               Version number
      -w               Display source with stripped comments and whitespace.
      -z <file>        Load Zend extension <file>
    ................................... failed
    What could be the problem? Is it in php-fpm.conf or php.ini?

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background: I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program so that it is friendly to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product itself is interfaced through a proprietary DLL that handles its own serial communication. The hard part: the "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product takes to go from position 0 to position 10,000, to a tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose producer-consumer model where the producer thread loops through reading the sensors and setting the outputs. The consumer thread executes functions containing timed loops that poll the producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress. Unsure where to start: is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
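    A minimal C# sketch of the producer-consumer structure described above might look like the following. The device interfaces, thresholds and polling intervals here are hypothetical placeholders (not NI's actual DAQ or driver APIs); it only illustrates how a polling producer and a timed consumer step can share state.

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Hypothetical device abstractions; the real DAQ, scale and gauge drivers
    // would sit behind interfaces like these so the test logic never touches
    // vendor APIs directly.
    public interface IPositionSensor { double ReadPosition(); }
    public interface IAnalogOutput  { void Write(double value); }

    public class TestBench
    {
        private readonly IPositionSensor _scale;
        private readonly IAnalogOutput _load;
        private readonly object _lock = new object();
        private double _position;
        private volatile bool _running = true;

        public TestBench(IPositionSensor scale, IAnalogOutput load)
        {
            _scale = scale;
            _load = load;
        }

        private double Position
        {
            get { lock (_lock) { return _position; } }
            set { lock (_lock) { _position = value; } }
        }

        // Producer: polls the sensors on its own thread.
        public void ProducerLoop()
        {
            while (_running)
            {
                Position = _scale.ReadPosition();
                Thread.Sleep(10); // polling interval, tuned to the device
            }
        }

        // Consumer: one timed test step that measures travel time and ramps
        // the analog output between the 6,000 and 8,000 position thresholds.
        public TimeSpan RunTravelStep()
        {
            var stopwatch = Stopwatch.StartNew();
            while (_running && Position < 10000)
            {
                double pos = Position;
                if (pos >= 6000 && pos < 8000)
                    _load.Write((pos - 6000) / 2000.0);                      // ramp up 0 -> 1
                else if (pos >= 8000)
                    _load.Write(Math.Max(0.0, 1.0 - (pos - 8000) / 2000.0)); // ramp back down
                Thread.Sleep(10);
            }
            stopwatch.Stop();
            return stopwatch.Elapsed; // resolution is far better than 0.1 s
        }

        public void Stop() { _running = false; }
    }

    The producer would run on its own thread (for example, new Thread(bench.ProducerLoop).Start()) while the test sequence calls RunTravelStep; keeping the vendor APIs behind the two small interfaces is what makes the bench easier to modify or simulate later.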

    Read the article

  • Exchange can't send emails with attachments

    - by Jack
    No one in our organization can send emails with attachments. Emails without attachments go through fine, but if an attachment is included, an error appears in the Server Failures folder under Sync Issues. The error is "The following message had an error and synchronization of it was skipped (0xc0090081)". We are using Symantec Mail Security; we shut it down to try to troubleshoot the problem, and now it fails to load. Any ideas as to what to check? I'm sorry I don't have more complete information, but I'm helping someone else try to figure this out, and I'm not the admin myself. Thanks.

    Read the article

  • Doubts about several best practices for REST API + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a RESTful API for business intelligence. It may not be limited to a RESTful API, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which has business logic local to the object). The API will likely have many calls, as it is a long-term project. While thinking about the design, I recalled a few best practices: 1) use command objects at the controller layer (I'm using Spring MVC); 2) use DTOs at the service layer; 3) validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations. 1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation-based validation can be done using this approach, sure. But what if I have two requests that take the same parameters but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh. 2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an API be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the APIs that manipulate these objects. At that point, however, the number of API methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large API with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here? 3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more)? Couldn't I just do all the validation in the service, throw a runtime exception with a list of bad parameters, and then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in Spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a REST API and not forms, the API parameters will probably map directly to service parameters.) I also have a question about unchecked vs. checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used, they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just always use unchecked exceptions and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.

    Read the article

  • Ubuntu installer does not show drives

    - by Tanweer Rashid
    I am trying to install Ubuntu 12.04 LTS on my Inspiron laptop, but the installer does not show any drives. My system has a 1TB SATA drive and a 32GB SSD. As far as I can figure, the boot files are kept on the SSD for fast startup (for Windows). During the Win7 installation, I had to manually load drivers for the RAID controller to see all available drives. Running fdisk -l from the live CD shows the following:
    ubuntu@ubuntu:~$ sudo fdisk -l
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x234b4782
    Device Boot Start End Blocks Id System
    /dev/sda1 63 80324 40131 de Dell Utility
    /dev/sda2 * 81920 41627647 20772864 7 HPFS/NTFS/exFAT
    /dev/sda3 41627648 357019647 157696000 7 HPFS/NTFS/exFAT
    /dev/sda4 357019648 1953517567 798248960 f W95 Ext'd (LBA)
    /dev/sda5 672415744 1312966655 320275456 7 HPFS/NTFS/exFAT
    /dev/sda6 1312968704 1953517567 320274432 7 HPFS/NTFS/exFAT
    Disk /dev/sdb: 32.0 GB, 32017047552 bytes
    255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x234b474b
    Device Boot Start End Blocks Id System
    /dev/sdb1 2048 16775167 8386560 84 OS/2 hidden C: drive
    ubuntu@ubuntu:~$
    In the Ubuntu installer, I can only choose /dev/sdb for "Device for boot loader installation", and sdb doesn't show any drives. I cannot select /dev/sda. Any ideas, anyone? Thanks.

    Read the article

  • AWS ELB as backend for Varnish Accelerator

    - by addisonj
    I am working on a large deployment on AWS that has high uptime requirements and variable loads throughout the day. Obviously, this is the perfect use case for ELB (Elastic Load Balancer) and autoscaling. However, we also rely on Varnish for caching of API calls. My initial instinct was to structure the stack so that Varnish uses the ELB as a backend, which in turn hits an appGroup: Varnish -> ELB -> AppServers. However, according to a few sources, that isn't possible, as ELB constantly changes the IP address behind its DNS hostname, which Varnish caches on start, meaning changes to the IP won't be picked up by Varnish. Reading around, however, it looks like people are doing this, so I am wondering what workarounds exist. Perhaps a script to reload the VCL periodically? In the case where this really is just not a good idea, any ideas for other solutions?

    Read the article

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config:
    upstream backend {
      server localhost:8080;
      ... more servers here
    }
    server {
      location /myloc {
        FORK-REQUEST http://my-other-url:3135/something
        proxy_pass http://backend;
      }
    }
    I would like nginx to send a copy of the request to the URL specified by FORK-REQUEST and after that to load-balance it across the backend servers and return the response to the client. As I don't need the response from FORK-REQUEST, it would be best if this request were async so normal processing doesn't have to wait. Is a scenario like this possible?

    Read the article

  • Why is the latency on one LVM volume consistently higher?

    - by David Schmitt
    I've got a server with LVM over RAID1. One of the volumes has a consistently higher IO latency (as measured by the diskstats_latency munin plugin) than the other volumes from the same group. In the munin graphs, the dark orange /root volume has consistently high IO latency - actually ten times the average latency of the physical devices. It also has the highest Min and Max values. My main concern is not the peaks, which occur under high load, but the constant load while (semi-)idle. The server is running Debian Squeeze with the VServer kernel and has four VServer containers and one KVM guest. I'm looking for ways to fix - or at least understand - this situation. Here are some parts of the system configuration:
    root@kvmhost2:~# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/system--host-root 19G 3.8G 14G 22% /
    tmpfs 16G 0 16G 0% /lib/init/rw
    udev 16G 224K 16G 1% /dev
    tmpfs 16G 0 16G 0% /dev/shm
    /dev/md0 942M 37M 858M 5% /boot
    /dev/mapper/system--host-isos 28G 19G 8.1G 70% /srv/isos
    /dev/mapper/system--host-vs_a 30G 23G 6.0G 79% /var/lib/vservers/a
    /dev/mapper/system--host-vs_b 5.0G 594M 4.1G 13% /var/lib/vservers/b
    /dev/mapper/system--host-vs_c 5.0G 555M 4.2G 12% /var/lib/vservers/c
    /dev/loop0 4.4G 4.4G 0 100% /media/debian-6.0.0-amd64-DVD-1
    /dev/loop1 4.4G 4.4G 0 100% /media/debian-6.0.0-i386-DVD-1
    /dev/mapper/system--host-vs_d 74G 55G 16G 78% /var/lib/vservers/d
    root@kvmhost2:~# cat /proc/mounts
    rootfs / rootfs rw 0 0
    none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
    none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
    none /dev devtmpfs rw,relatime,size=16500836k,nr_inodes=4125209,mode=755 0 0
    none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
    /dev/mapper/system--host-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
    tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
    tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
    fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
    /dev/md0 /boot ext3 rw,sync,relatime,errors=continue,data=ordered 0 0
    /dev/mapper/system--host-isos /srv/isos ext3 rw,relatime,errors=continue,data=ordered 0 0
    /dev/mapper/system--host-vs_a /var/lib/vservers/a ext3 rw,relatime,errors=continue,data=ordered 0 0
    /dev/mapper/system--host-vs_b /var/lib/vservers/b ext3 rw,relatime,errors=continue,data=ordered 0 0
    /dev/mapper/system--host-vs_c /var/lib/vservers/c ext3 rw,relatime,errors=continue,data=ordered 0 0
    /dev/loop0 /media/debian-6.0.0-amd64-DVD-1 iso9660 ro,relatime 0 0
    /dev/loop1 /media/debian-6.0.0-i386-DVD-1 iso9660 ro,relatime 0 0
    /dev/mapper/system--host-vs_d /var/lib/vservers/d ext3 rw,relatime,errors=continue,data=ordered 0 0
    root@kvmhost2:~# cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sda2[0] sdb2[1]
    975779968 blocks [2/2] [UU]
    md0 : active raid1 sda1[0] sdb1[1]
    979840 blocks [2/2] [UU]
    unused devices: <none>
    root@kvmhost2:~# iostat -x
    Linux 2.6.32-5-vserver-amd64 (kvmhost2) 06/28/2012 _x86_64_ (8 CPU)
    avg-cpu: %user %nice %system %iowait %steal %idle
    3.09 0.14 2.92 1.51 0.00 92.35
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 23.25 161.12 7.46 37.90 855.27 1596.62 54.05 0.13 2.80 1.76 8.00
    sdb 22.82 161.36 7.36 37.66 850.29 1596.62 54.35 0.54 12.01 1.80 8.09
    md0 0.00 0.00 0.00 0.00 0.14 0.02 38.44 0.00 0.00 0.00 0.00
    md1 0.00 0.00 53.55 198.16 768.01 1585.25 9.35 0.00 0.00 0.00 0.00
    dm-0 0.00 0.00 0.48 20.21 16.70 161.71 8.62 0.26 12.72 0.77 1.60
    dm-1 0.00 0.00 3.62 10.03 28.94 80.21 8.00 0.19 13.68 1.59 2.17
    dm-2 0.00 0.00 0.00 0.00 0.00 0.00 9.17 0.00 9.64 6.42 0.00
    dm-3 0.00 0.00 6.73 0.41 53.87 3.28 8.00 0.02 3.44 0.12 0.09
    dm-4 0.00 0.00 17.45 18.18 139.57 145.47 8.00 0.42 11.81 0.76 2.69
    dm-5 0.00 0.00 2.50 46.38 120.50 371.07 10.06 0.69 14.20 0.46 2.26
    dm-6 0.00 0.00 0.02 0.10 0.67 0.81 12.53 0.01 75.53 18.58 0.22
    dm-7 0.00 0.00 0.00 0.00 0.00 0.00 7.99 0.00 11.24 9.45 0.00
    dm-8 0.00 0.00 22.69 102.76 407.25 822.09 9.80 0.97 7.71 0.39 4.95
    dm-9 0.00 0.00 0.06 0.08 0.50 0.62 8.00 0.07 481.23 11.72 0.16
    root@kvmhost2:~# ls -l /dev/mapper/
    total 0
    crw------- 1 root root 10, 59 May 11 11:19 control
    lrwxrwxrwx 1 root root 7 Jun 5 15:08 system--host-kvm1 -> ../dm-4
    lrwxrwxrwx 1 root root 7 Jun 5 15:08 system--host-kvm2 -> ../dm-3
    lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-isos -> ../dm-2
    lrwxrwxrwx 1 root root 7 May 11 11:19 system--host-root -> ../dm-0
    lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-swap -> ../dm-9
    lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_d -> ../dm-8
    lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_b -> ../dm-6
    lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_c -> ../dm-7
    lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_a -> ../dm-5
    lrwxrwxrwx 1 root root 7 Jun 5 15:08 system--host-kvm3 -> ../dm-1
    root@kvmhost2:~#

    Read the article

  • ERP/CRM Systems: Desktop Based or Web Based?

    - by Parhs
    Hello guys... I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser. My first experience was with a web-based ERP when I was 14 years old. It was terribly slow: for the simplest tasks you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The machine it ran on, which worked until today and still has no problems, was from 1993. The user interface, of course, was text-based. The speed at which those guys placed orders was amazing! Just typing the name of the customer, then 5-10 keys to add a product to the order... Compared to this, the ERP's page for placing orders Link (click sales orders) seems terribly slow for adding a product. No keyboard shortcut works to save what you added, and generally I believe you need four times as long to place an order compared to the text interface. Having to use both mouse and keyboard for this task is BAD and sadistic... So how the heck can these people ever use a system like that??? So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults that browsers use isn't cross-compatible, and that is a huge problem. Finally, if we MUST (are forced to) use the cloud in the near future, what about keyboard shortcuts? I feel confused... I have seen converters of desktop applications to browser applications, but they are SLOW as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • Compiling kernel problem

    - by James
    Hi, I have an HP Pavilion dm3t with Intel HD graphics running Ubuntu 10.10 64-bit. I'm trying to compile and install a patched kernel according to this: https://launchpad.net/~kamalmostafa/+archive/linux-kamal-mjgbacklight So I downloaded the tarball from here (linked to from the page above): http://kernel.ubuntu.com/git?p=kamal/ubuntu-maverick.git;a=shortlog;h=refs/heads/mjg-backlight I untar'd it to a directory, entered the directory and ran: make defconfig which was successful, so I ran: make which seemed to work fine until it gave these errors:
    ubuntu/ndiswrapper/iw_ndis.c:1966: error: unknown field ‘num_private’ specified in initializer
    ubuntu/ndiswrapper/iw_ndis.c:1966: warning: initialization makes pointer from integer without a cast
    ubuntu/ndiswrapper/iw_ndis.c:1967: error: unknown field ‘num_private_args’ specified in initializer
    ubuntu/ndiswrapper/iw_ndis.c:1967: warning: excess elements in struct initializer
    ubuntu/ndiswrapper/iw_ndis.c:1967: warning: (near initialization for ‘ndis_handler_def’)
    ubuntu/ndiswrapper/iw_ndis.c:1970: error: unknown field ‘private’ specified in initializer
    ubuntu/ndiswrapper/iw_ndis.c:1970: warning: initialization makes integer from pointer without a cast
    ubuntu/ndiswrapper/iw_ndis.c:1970: error: initializer element is not computable at load time
    ubuntu/ndiswrapper/iw_ndis.c:1970: error: (near initialization for ‘ndis_handler_def.num_standard’)
    ubuntu/ndiswrapper/iw_ndis.c:1971: error: unknown field ‘private_args’ specified in initializer
    ubuntu/ndiswrapper/iw_ndis.c:1971: warning: initialization from incompatible pointer type
    make[2]: *** [ubuntu/ndiswrapper/iw_ndis.o] Error 1
    make[1]: *** [ubuntu/ndiswrapper] Error 2
    make: *** [ubuntu] Error 2
    How can I compile and install this kernel successfully? I'm new to this and would appreciate any help.

    Read the article

  • Shared to Dedicated or Amazon CloudFront to improve performances and keep secured?

    - by user978548
    I have a WordPress site whose home page currently takes about 1.8s to 2.5s to completely load in my country. The page weight is about 700 KB (static content included). In order to improve performance, I'm considering two solutions: switching to a dedicated host, or using Amazon S3 with CloudFront to serve static content. My current shared hosting has servers in a neighboring country but not exactly in mine, and both Amazon and the dedicated host have some, so that's already an advantage. Considering all that, I still have three questions: Currently having low traffic (100 unique visitors/day, but growing), will a dedicated server make a huge difference compared to my shared hosting? Knowing that I already use a cookie-less domain to deliver static content (but using a redirection to the same server), would using Amazon S3 make a real difference? Talking about the cons of a dedicated server vs. Amazon S3: if I choose for the dedicated server something like Ubuntu Server, do daily package updates, and have only port 80 open, would that be sufficient in terms of security (in comparison with my current shared hosting, which manages everything for me)?

    Read the article

  • Nginx & Lua: Hacks, optimizations & observations

    - by Quintin Par
    Following this post on using Lua to increase nginx’s flexibility and reduce load on the web stack, I am curious to know how people are using Lua to enhance nginx’s capabilities. Are there any notable hacks, optimizations or observations using Lua? Hacks that people have used to unlock capabilities with nginx that would otherwise be complicated or impossible with a plain webserver or reverse proxy? Edit: links: http://thechangelog.com/post/3249294699/super-nginx-killer-build-of-nginx-build-for-luajit-plus http://skillsmatter.com/podcast/home/scripting-nginx-with-lua/te-4729 http://devblog.mixlr.com/2012/06/26/how-we-use-nginx-lua-and-redis-to-beta-ify-mixlr/

    Read the article

  • Accessing localhost via IIS 7.5 on Windows 7 very slow

    - by Ian Devlin
    (I've asked this over on Stack Overflow already, but thought I'd ask here as well.) I'm currently running an ASP.NET application on IIS 7.5 on Windows 7. When I access this application in Internet Explorer (either 6, 7 or 8) it is incredibly slow and often fails to load at all. There are constant messages at the bottom saying "Waiting for http://localhost/..." or sometimes "waiting for about:blank" (I've read that this can be a virus, but I've run all the usual checks and it's not), and it then returns the usual "Internet Explorer cannot display the webpage". I've also tried this using 127.0.0.1 and the machine name, with the same results. I've tried the same application in the latest Firefox, Safari, Chrome and Opera and they all work fine. I've also installed the same application on a Windows Server 2003 machine, and it all works fine via Internet Explorer there. I've also turned off the IPv6 setting on the LAN connection. Does anyone have any ideas why this doesn't work with Internet Explorer and yet does with other browsers?

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html and it talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config dumps out a load of config information. When I try to connect using a browser and the admin bind credentials: ldapsearch -D '<adminDN>' -w <password> -b 'cn=config' I get:
    # extended LDIF
    #
    # LDAPv3
    # base <> (default) with scope subtree
    # filter: (objectclass=*)
    # requesting: ALL
    #
    # search result
    search: 2
    result: 32 No such object
    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!

    Read the article

  • Sensitive data in init scripts

    - by Steve Jorgensen
    I'm adapting some examples I've found by Googling to build an init script to run a VirtualBox OSE virtual machine as a daemon. I would like to specify a password for VNC access to the VM, and this must be given as an argument to the VBoxHeadless command. Conventionally, init scripts are readable by standard users, and this seems like a useful convention, but I also don't want the VNC password for this VM to be stored in easily accessible plain text. What's the most appropriate/conventional way to handle this kind of situation? Maybe put a root-readable supporting data file someplace, and have the init script load the value from there?

    Read the article

  • How to install Mac OS X Snow Leopard server on any virtual machine with DMG?

    - by Eonil
    I'm trying to install Mac OS X Snow Leopard Server on VirtualBox. I know this is a duplicate question, but the other questions have no good answer. I'm installing in VirtualBox on an iMac, so this is pretty legal. The problem is that I want to install from a DMG image, because installing from the DVD drive is too slow, I have to install Mac OS X many times, and taking the DVD disc out of its box is annoying too. But VirtualBox fails to install from the DMG; it can't load the kernel. It installs fine from DVDs. Is there any way to do this? I'm considering other VM solutions like Parallels or VMware if they support installing from DMG images well. If you know about them for certain, please let me know.

    Read the article

  • prevent search engines indexing depending on domain

    - by Javier
    We have a dedicated server with a hosting company, with a couple of dozen websites on it. It happens that the nameservers' IPs (e.g. ns1.domain.com, ns2.domain.com) coincide with some client websites, let's say webclient1.com and webclient2.com. The problem is that for certain searches in Google, some results show up as ns1.domain.com/result instead of webclient1.com/result, which is pretty wrong and annoying for our clients. Actually, if you type ns1.domain.com or ns2.domain.com into the browser, it will load one of the client pages instead. Is there any way to prevent Google from indexing those results, only in the case where the robots are coming to check the ns domains? It may not be correct to ask this as well, but why is it happening? Is it a result of a bad server configuration? I'm pretty new to these matters, so thank you in advance for any help!

    Read the article

  • Apache log file problem

    - by Luke
    I've recently set up an Apache 2 web server and I noticed quite a few lines in the error and access logs that start with the following sequence (but longer). Does anyone know where this comes from? ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ ....... My setup is an Apache 2 load balancer with mod_balancer enabled and two Apache 2 web servers. All three servers write to the same log files on an NFS share. My first guess is that the problem has to do with that, since it's the only difference that comes to mind from other setups I've used in the past, but I'm not sure.

    Read the article

  • Why can't I create an Alias Resource Record Set for an EC2 instance

    - by praterade
    I have been working with AWS for over a year, setting up EC2 instances, domains, ELBs, etc. When I want to assign a subdomain to an EC2 instance, I have to create an elastic IP (that I pay for), then assign a CNAME record to that elastic IP. When I want to assign a subdomain to an ELB (load balancer) instance, I just create an alias resource record set to the ELB. I've read over the docs and don't understand why AWS doesn't support aliasing to instances. Am I missing a key concept here? Wouldn't it be simpler to just alias EC2 instances and skip the whole elastic IP bit?

    Read the article

  • Scaling up an apache server

    - by pehrs
    I have an Ubuntu server running apache2 which I expect to be hit by around 500-1000 concurrent users for a limited amount of time. The server serves a mixture of custom (rather light) PHP pages connected to a PostgreSQL database (around 20 MB in size) and static content. The hardware is stable and pretty beefy: an Intel Xeon E5420 @ 2.5 GHz with 12 GB RAM. During previous rushes on this server I have increased ServerLimit and MaxClients for the MPM modules and decreased Timeout and KeepAliveTimeout. It has worked, but it has been sluggish, and I have a feeling more can be done. How would you suggest configuring the Apache server to handle this kind of load?

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache, and I get the following error when trying to access testing/index.html. The idea is that I would like to have, for every customer, a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist.
    Forbidden. You don't have permission to access /index.html on this server. Apache/2.2.22 (Ubuntu) Server at testing Port 80
    Here are the steps that I followed:
    Step 1: sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
    Step 2: sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
    sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
    Step 3: sudo adduser $USER www-data
    Step 4: sudo a2enmod userdir
    Step 5: sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing
    I edited the file /etc/apache/sites-available/testing so it looks like this:
    <VirtualHost *:80>
      ServerAdmin webmaster@localhost
      ServerName testing
      DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
      <Directory />
        Options FollowSymLinks
        AllowOverride None
      </Directory>
      <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ >
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
      </Directory>
      ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
      <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
      </Directory>
      ErrorLog ${APACHE_LOG_DIR}/error.log
      # Possible values include: debug, info, notice, warn, error, crit,
      # alert, emerg.
      LogLevel warn
      CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    Step 6: I edited the hosts file ("/etc/hosts") so it looks like this:
    127.0.0.1 localhost
    127.0.0.1 testing
    # The following lines are desirable for IPv6 capable hosts
    ::1 ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    Step 7: sudo a2ensite testing
    sudo service apache2 restart
    I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages that I found describe the same steps as above. I know there are similar questions on the internet, but the answer there is to change permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate, but I couldn't find the right answer. Thank you! PS: I asked this also on AskUbuntu but didn't get any answers, so I'm trying my luck here.
    Edit: There isn't much in the error log or the access log.
    On the access.log:
    ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing"
    ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
    127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0"
    And the last line repeats for about 200 rows.
    On the error.log:
    1. These lines repeat from time to time:
    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
    [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations
    [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down
    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
    [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations
    2. And this is the predominant error (hundreds of lines):
    [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied

    Read the article

  • CodePlex Daily Summary for Tuesday, December 18, 2012

    CodePlex Daily Summary for Tuesday, December 18, 2012Popular Releasessb0t v.5: sb0t 5 Template for Visual Studio: This is the official sb0t 5 template for Visual Studio 2010 and Visual Studio 2012 for C# programmers. Use this template to create your own sb0t 5 extensions.F# PowerPack with F# Compiler Source Drops: PowerPack for FSharp 3.0 + .NET 4.x + VS2010: This is a release of the old FSharp Power Pack binaries recompiled for F# 3.0, .NET 4.0/4.5 and Silveright 5. NOTE: This is for F# 3.0 & .NET 4.0 or F# 3.0 & Silverlight 5 NOTE: The assemblies are no longer strong named NOTE: The assemblies are not added to the GAC NOTE: In some cases functionality overlaps with F# 3.0, e.g. SI Units of measurecodeSHOW: codeSHOW AppPackage Release 16: This release is a package file built out of Visual Studio that you can side load onto your machine if for some reason you don't have access to the Windows Store. To install it, just unzip and run the .ps1 file using PowerShell (right click | run with PowerShell). Attention: if you want to download the source code for codeSHOW, you're in the wrong place. You need to go to the SOURCE CODE tab.Move Mouse: Move Mouse 2.5.3: FIXED - Issue where it errors on load if the screen saver interval is over 333 minutes.LINUX????????: LINUX????????: LINUX????????cnbeta: cnbeta: cnbetaCSDN ??: CSDN??????: CSDN??????PowerShell Community Extensions: 2.1.1 Production: PowerShell Community Extensions 2.1.1 Release NotesDec 16, 2012 This version of PSCX supports both Windows PowerShell 2.0 and 3.0. Bug fix for HelpUri error with the Get-Help proxy command. See the ReleaseNotes.txt download above for more information.VidCoder: 1.4.11 Beta: Added Hungarian translation, thanks to Brechler Zsolt. Update HandBrake core to SVN 5098. This update should fix crashes on some files. Updated the enqueue split button to fit in better with the active Windows theme. Updated presets to use x264 preset/profile/level.BarbaTunnel: BarbaTunnel 6.0: Check Version History for more information about this release.???: Cnblogs: CNBLOGSSandcastle Help File Builder: SHFB v1.9.6.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...Electricity, Gas and Temperature Monitoring with Netduino Plus: V1.0.1 Netduino Plus Monitoring: This is the first stable release from the Netduino Plus Monitoring program. Bugfixing The code is enhanced at some places in respect to the V0.6.1 version There is a possibility to add multiple S0 meters Website for realtime display of data Website for configuring the Netduino Comments are welcome! Additions will not be made to this version. This is the first and last major Netduino Plus V1 release. The new development will take place with the Netduino Plus V2 development board in mi...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.8): [FIX] Fixed issue not displaying CRM system button images correctly (incorrect path in file VisualRibbonEditor.exe.config)My Expenses Windows Store LOB App Demo: My Expenses Version 1: This is version 1 of the MyExpenses Windows 8 line of business demo app. 
The app is written in XAML and C#. It calls a WCF service that works with a SQL Server database. The app uses the Callisto toolkit. You can get it at https://github.com/timheuer/callisto. The Expenses.sql file contains the SQL to create the Expenses database. The ExpensesWCFService.zip file contains the WCF service, also written in C#. You should create a WCF service. Create an Entity Framework model and point it to...BlackJumboDog: Ver5.7.4: 2012.12.13 Ver5.7.4 (1)Web???????、???????????????????????????????????????????VFPX: ssClasses A1.0: My initial release. See https://vfpx.codeplex.com/wikipage?title=ssClasses&referringTitle=Home for a brief description of what is inside this releaseLayered Architecture Solution Guidance (LASG): LASG 1.0.0.8 for Visual Studio 2012: PRE-REQUISITES Open GAX (Please install Oct 4, 2012 version) Microsoft® System CLR Types for Microsoft® SQL Server® 2012 Microsoft® SQL Server® 2012 Shared Management Objects Microsoft Enterprise Library 5.0 (for the generated code) Windows Azure SDK (for layered cloud applications) Silverlight 5 SDK (for Silverlight applications) THE RELEASE This release only works on Visual Studio 2012. Known Issue If you choose the Database project, the solution unfolding time will be slow....Fiskalizacija za developere: FiskalizacijaDev 2.0: Prva prava produkcijska verzija - Zakon je tu, ova je verzija uskladena sa trenutno važecom Tehnickom specifikacijom (v1.2. od 04.12.2012.) i spremna je za produkcijsko korištenje. Verzije iza ove ce ovisiti o naknadnim izmjenama Zakona i/ili Tehnicke specifikacije, odnosno, o eventualnim greškama u radu/zahtjevima community-a za novim feature-ima. Novosti u v2.0 su: - That assembly does not allow partially trusted callers (http://fiskalizacija.codeplex.com/workitem/699) - scheme IznosType...Bootstrap Helpers: Version 1: First releaseNew ProjectsAsh Launcher: The ash launcher is a launcher for the ash modpack for minecraftAsynchronous PowerShell Module: psasync is a PowerShell module containing simple helper functions to allow for multi-threaded operations using Runspaces. ClearVss: ClearVss clear any reference of Visual SourceSafe in yours solution. You'll no longer have to delete and modify solution files by hand. It's developed in C#dewitcher Framework: A rly cool Framework, made for use with COSMOS. - Console-class that supports colored, horizontal- and vertical- centered text printing - more =PGTD Pad: Notepad for GTD TechniqueHTML/JavaScript Rendering Web Part: HTML/JavaScript Render Web Part - replaces the SharePoint Content Editor Web Part (CEWP). Permits flying in script, HTML, etc. to a SharePoint Page.Kracken Generator and Architecture Tool for Visual Studio 2012: Welcome to Kracken a suite of tools for creating code from Architecture models. This program is the pet project of Tracy Rooks.Logica101: Aplicación para visualización de procesos algebraicosManaged DismApi Wrapper: This is a managed wrapper for the native Deployment Image Servicing and Management (DISM) API. 
This allows .NET developers to call into the native API instead Mvc web-ajax: A Javascript library for displaying lists and editing objects N2F Yverdon InfoBoxes: A simple extension (plus some resources) for managing information boxes on a given page.N2F Yverdon Solar Flare Reflector: The solar flare reflector provides minimal base-range protection for your N2F Yverdon installation against solar flare interference.newplay: goodPhnyx: The project is under re-structuring - look backPigeonCms: Cms made with c# (using NET framework 3.5, SqlServer2005 or Express edition) with many Joomla-like features. Poppæa: Very early version of a c# library for cassandra nosql database. For now it adds support for the new CQL3 protocol in cassandra 1.2+. Proximity Tapper: Proximity Tapper is a developer tool for working with NFC on both Windows Phone and Windows, and allows you to build NFC apps in the Windows Phone emulator.SharePoint 2010 SpellCheck: SharePoint 2010 SpellCheck Project will let you enable spelling check functionality in SharePoint 2010 using SpellCheck.asmxShift8Read Community Credit Tool (DotNetNuke Module): A simple DotNetNuke Module I created for submitting content to Community-Credit.comTempistGamer Client: The tempistgamers game client manager. Includes a cross-platform, cross-game UI to allow for player interactivity. While the program itself manages the games.testtom12122012tfs02: fdstesttom12172012tfs02: uioTEviewer: The Transient Event Viewer is an application designed for visualization and analysis of transient events.trucho-JCI: summaryTypeSharp: TypeSharp is a C# to TypeScript code generatorWeiboWPSdk: To make wp applications about sina weibo more easily.XNA Games Core: XNA Core Game Programming library. Uses interfaces and delegates, in tune with the .NET way, with inheritence used for implementation reuse.

    Read the article

  • If I implement a web-service, how do I respond to POST requests with JSON?

    - by Vova Stajilov
    I have to make a rather complex system for my diploma work. Logically it will consist of the following components: a database; a web service; a management site with a web interface; and a client iOS application that will consume the web service. I decided to implement the first three components on .NET. First I will create a database based on the expected information load - this part is clear. But then I need a web service that will return data in JSON format for iOS clients to consume - that's obvious and not that hard to implement, and for this I will use WCF. Now I have a question: if I implement the web service, how will I be able to respond to POST requests with JSON? It probably involves WCF's JSON support or something related? I also need some web pages for the admin part, so will that web application be able to consume my centralized web services as well, or should I develop it separately? I just want my web service to act like a set of controllers. There is a related question here, but it doesn't quite reflect my question.
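    As a rough sketch of the WCF side of this, a REST-style endpoint that accepts a JSON POST body and returns JSON can be declared with the webHttp programming model. This is only a hedged illustration: the contract, DTO types and URI template below are made-up placeholders, and the endpoint would still need to be exposed over webHttpBinding with the webHttp behavior (or WebServiceHostFactory) in configuration.

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    // Hypothetical data contracts serialized to/from JSON for the iOS client.
    [DataContract]
    public class ReportQuery
    {
        [DataMember] public string Keyword { get; set; }
    }

    [DataContract]
    public class ReportDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }
    }

    [ServiceContract]
    public interface IReportService
    {
        // POST /reports/search with a JSON body; the response is JSON too.
        [OperationContract]
        [WebInvoke(Method = "POST",
                   UriTemplate = "reports/search",
                   RequestFormat = WebMessageFormat.Json,
                   ResponseFormat = WebMessageFormat.Json,
                   BodyStyle = WebMessageBodyStyle.Bare)]
        ReportDto[] Search(ReportQuery query);
    }

    public class ReportService : IReportService
    {
        public ReportDto[] Search(ReportQuery query)
        {
            // Placeholder logic; a real service would delegate to the domain/data layer.
            return new[] { new ReportDto { Id = 1, Title = "Sample for " + query.Keyword } };
        }
    }

    An admin web application could then consume the same JSON endpoints over HTTP, keeping the service layer as the single place where business logic lives; whether that is cleaner than referencing the service assemblies directly is a design choice for the admin part.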

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short-summary of just a few of them: New Admin Portal and Command Line Tools Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license. Virtual Machines Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built-into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.   The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required. Web Sites Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.  
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time. You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web-site’s dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways: 1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit with the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs. 2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application. New SDKs and Tooling Support: We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system. Much, Much More: The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com. You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them. Hope this helps, Scott
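    A minimal sketch of how a web or worker role might consume the distributed cache described above, using the AppFabric-style .NET cache client. It assumes the caching assemblies are referenced and a default cache is configured for the role; the Product type and data-access call are placeholders, not part of the announcement.

    using System;
    using Microsoft.ApplicationServer.Caching;

    public class ProductCatalogCache
    {
        // DataCacheFactory is relatively expensive to create, so reuse one instance.
        private static readonly DataCacheFactory Factory = new DataCacheFactory();
        private readonly DataCache _cache = Factory.GetDefaultCache();

        public Product GetProduct(string id)
        {
            // Any role instance that cached the item first serves all the others.
            var cached = _cache.Get(id) as Product;
            if (cached != null)
                return cached;

            var product = LoadProductFromDatabase(id);         // hypothetical data layer
            _cache.Put(id, product, TimeSpan.FromMinutes(10));  // absolute expiry
            return product;
        }

        private Product LoadProductFromDatabase(string id)
        {
            // Stands in for whatever data access the application actually uses.
            return new Product { Id = id };
        }
    }

    [Serializable]
    public class Product
    {
        public string Id { get; set; }
    }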

    Read the article

  • How do I create an ISO image from a directory structure on CentOS?

    - by tom smith
    I'm trying to figure out the exact mkisofs command to create an ISO with the following directory and file structure. I've tried different commands, but when I mount the ISO that is created, the directory tree has not been reproduced. The initial directory tree is:
    master.iso::
    mount -o loop /apps/vmware/master.iso /mnt/vmtest
    ls /mnt/vmtest
    isolinux ks.cfg upgra32 upgra64 upgrade.sh
    ls /mnt/vmtest/isolinux
    boot.cat initrd.img isolinux.bin isolinux.cfg vmlinuz
    I've used different variations of the following mkisofs command without success:
    mkisofs -o '/foo/test.iso' -b 'isolinux.bin' -c 'boot.cat' -no-emul-boot -boot-load-size 4 -boot-info-table 'isolinux'
    How do I make an ISO that captures a directory's exact structure?

    Read the article
