Search Results


  • Encrypt connection between apache web server and mysql server.

    - by microchasm
    I'm setting up a local webapp. I have a CentOS-5 box that will be the webserver (Apache 2.2). I have another box (RHEL5) that will be used only for MySQL. The data will be encrypted on the webserver via PHP before being sent to the MySQL box and inserted into the db. All web-based connections to the webserver will be encrypted via SSL. From the research I've done, it's not totally clear whether there is a need to encrypt the connection to the db from the webserver (NB paranoia level: Orange). If it is not overkill, or even if it is (unless it is a really bad idea for some reason), any advice or pointers on the direction to take to get this done would be appreciated.
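
    Should you decide to encrypt it anyway, here is a hedged sketch of what that can look like, assuming MySQL was built with SSL support and certificates have already been generated (all paths and hostnames are illustrative):

        # /etc/my.cnf on the RHEL5 MySQL box -- enable SSL (paths are assumptions)
        [mysqld]
        ssl-ca=/etc/mysql/certs/ca-cert.pem
        ssl-cert=/etc/mysql/certs/server-cert.pem
        ssl-key=/etc/mysql/certs/server-key.pem

        <?php
        // on the CentOS webserver: connect over SSL with mysqli
        $db = mysqli_init();
        $db->ssl_set('/etc/mysql/certs/client-key.pem',
                     '/etc/mysql/certs/client-cert.pem',
                     '/etc/mysql/certs/ca-cert.pem', NULL, NULL);
        // 'db.internal' stands in for the MySQL box's hostname
        $db->real_connect('db.internal', 'user', 'pass', 'mydb', 3306, NULL, MYSQLI_CLIENT_SSL);

    An alternative worth weighing at this paranoia level is an SSH tunnel or stunnel between the two boxes, which avoids MySQL certificate management entirely.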

    Read the article

  • The application attempted to perform an operation not allowed by the security policy

    - by user16521
    I ran this command on the server that hosts the code share my local IIS site points to (via UNC to that share), per http://support.microsoft.com/kb/320268 :

        Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on

    (Obviously I replaced Drive with C, and the actual computername and sharename with the ones I'm sharing out.) But when I run the ASP.NET site, I am still getting this runtime exception:

        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.

        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
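
    A hedged alternative to machine-wide CAS policy, assuming the hosting configuration permits it (a machine-level web.config can lock the trust level): raise the application's trust level in its own web.config, as the exception message itself suggests:

        <!-- web.config of the ASP.NET site -->
        <configuration>
          <system.web>
            <trust level="Full" />
          </system.web>
        </configuration>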

    Read the article

  • Wifi - wireless network not working after installing Ubuntu 12.04

    - by Nilesh
    I installed Ubuntu 12.04 on my Dell Vostro 3460. Initially, neither wired nor wireless networking worked. After installing compat-wireless-2012-07-03-pc.tar.bz2, wired networking works fine but wireless still does not. Both worked fine previously with Ubuntu 11.10, and both currently work fine in Win7.

    lspci:

        00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Ivy Bridge PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.4 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 5 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 0de9 (rev a1)
        02:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
        03:00.0 Ethernet controller: Atheros Communications Inc. AR8161 Gigabit Ethernet (rev 10)

    iwconfig:

        lo    no wireless extensions.
        eth0  no wireless extensions.

    ifconfig:

        eth0  Link encap:Ethernet  HWaddr 84:8f:69:d4:35:9b
              inet addr:10.24.22.72  Bcast:10.24.31.255  Mask:255.255.224.0
              inet6 addr: fe80::868f:69ff:fed4:359b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:15332 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10425 errors:0 dropped:0 overruns:0 carrier:1
              collisions:0 txqueuelen:1000
              RX bytes:9262594 (9.2 MB)  TX bytes:1572030 (1.5 MB)
              Interrupt:16

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:1432 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1432 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:145656 (145.6 KB)  TX bytes:145656 (145.6 KB)
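
    A hedged guess for the Broadcom "Device 4365", which the in-tree brcmsmac/b43 drivers of that era often did not cover: confirm the exact PCI ID first, then try Ubuntu's packaged proprietary wl driver (package name as shipped in 12.04):

        lspci -nn | grep -i network          # note the [14e4:xxxx] device ID
        sudo apt-get install bcmwl-kernel-source
        sudo modprobe wl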

    Read the article

  • php rsync with exec() not working

    - by mojeime
    Why does this:

        rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder

    work from a cronjob, while this:

        exec("rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder");

    doesn't work? I know exec is working because I have a few places in my app that do conversion from PDF to JPG with ImageMagick (exec).

    SOLVED: exec is working OK; it was a permission issue on the remote server. The "local" server is a shared reseller account and the remote server is my first VPS, an Ubuntu 10.10 LAMP box. If only I had a system administrator, since I'm just a software developer forced to do this and I stink at it :) Thank you all!
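
    For future readers, a hedged debugging sketch: exec() can hand back rsync's output and exit code, which would have pointed straight at the permission error (the 2>&1 redirect captures stderr too; "user@remotehost" stands in for the redacted address):

        <?php
        // capture output lines and exit status from the rsync run inside PHP
        exec("rsync -avz -e ssh /home/userneme/folder user@remotehost:/var/www/folder 2>&1",
             $output, $status);
        if ($status !== 0) {
            error_log("rsync failed (exit $status): " . implode("\n", $output));
        }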

    Read the article

  • I'd like to switch from 32-bit to 64-bit within the same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems that it would be a marginally better choice for me now. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using Synaptic's save/read markings. The problem is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work, hence my appeal for expert tips. Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and that would save at least most of the reconfiguration time. But are there differences in configuration files in either of these directories for 32 vs 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64-bit path for upgrades, but I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time. Some methods I've seen suggested, among others:

    A. From the Ubuntu forums: On your old system (assuming it is still working), start up Synaptic and go: File->Save Markings and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system. You need to check on the bottom: "Save full state, not only changes". This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?) then start up Synaptic and go: File->Read Markings, point it at your saved file, and after that has completed, select Apply to kick off the download & installation of all of those packages you had installed previously!

    B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration:

        # dpkg --get-selections "*" > myselections   # or use \*
        # debconf-get-selections > debconfsel.txt

    and the following will reinstall and reconfigure them:

        # dselect update
        # debconf-set-selections < debconfsel.txt
        # dpkg --set-selections < myselections
        # apt-get -u dselect-upgrade   # or dselect install

    C. A variation on the above I've seen a lot, this one from Stack Overflow:

        dpkg --get-selections > package_list

    then on the new install:

        cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade

    I don't really understand B, or why it's slightly different from many others.

    Read the article

  • loading 60 images locally fast, is it possible...? [closed]

    - by Tariq- iPHONE Programmer
    When my app starts it loads 60 images at a time into a UIImageView, and it also loads background music. In the Simulator it works fine, but on the iPad it crashes.

        -(void)viewDidLoad {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            //img.animationImages = [[NSArray arrayWithObjects:
            MyImages = [NSArray arrayWithObjects:
                //[UIImage imageNamed: @"BookOpeningB001.jpg"],...... B099,
                nil];
            //[NSTimer scheduledTimerWithTimeInterval:8.0 target:self selector:@selector(onTimer) userInfo:nil repeats:NO];
            img.animationImages = MyImages;
            img.animationDuration = 8.0;   // seconds
            img.animationRepeatCount = 1;  // 0 = loops forever
            [img startAnimating];
            [self.view addSubview:img];
            [img release];
            [pool release];
            //[img performSelector:@selector(displayImage:)];
            //[self performSelector:@selector(displayImage:) withObject:nil afterDelay:10.0];
            [self performSelector:@selector(displayImage) withObject:nil afterDelay:8.0];
        }

        -(void)displayImage {
            SelectOption *NextView = [[SelectOption alloc] initWithNibName:nil bundle:nil];
            [self presentModalViewController:NextView animated:NO];
            [NextView release];
        }

    Am I doing anything wrong? Is there any other alternative to load the local images faster into a UIImageView?
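
    If memory pressure is the crash cause, one hedged alternative is to avoid the cache that imageNamed: keeps for every frame; imageWithContentsOfFile: loads without caching (the file names here follow the BookOpeningBxxx pattern and are illustrative):

        // build the frame array without UIImage's internal cache
        NSMutableArray *frames = [NSMutableArray arrayWithCapacity:60];
        for (int i = 1; i <= 60; i++) {
            NSString *name = [NSString stringWithFormat:@"BookOpeningB%03d", i];
            NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"jpg"];
            UIImage *frame = [UIImage imageWithContentsOfFile:path];
            if (frame) [frames addObject:frame];
        }
        img.animationImages = frames;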

    Read the article

  • Why would more CPU cores on a virtual machine slow compile times?

    - by Sid
    [edit #2] If anyone from VMware can hit me up with a copy of VMware Fusion, I'd be more than happy to do the same as a VirtualBox vs VMware comparison. Somehow I suspect the VMware hypervisor will be better tuned for hyperthreading (see my answer too).

    I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would scale very well with the number of cores. But what I'm seeing is:

        8 cores: 1.89 sec
        4 cores: 1.33 sec
        2 cores: 1.24 sec
        1 core:  1.15 sec

    Is this simply a design artifact of a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior, so if someone knows more about this than me, I'd be curious to read your answer. Thanks, Sid

    [edit: addressing comments]
    @MartinBeckett: Cold compiles were discarded.
    @MonsterTruck: Couldn't find an open source project to compile directly. Would be great, but I can't screw up my dev env right now.
    @Mr Lister, @philosodad: Have 8 hw threads, using VirtualBox, so it should be a 1:1 mapping without emulation.
    @Thorbjorn: I have 6.5 GB for the VM and a smallish VS2012 project; it's quite unlikely that I'm swapping in/out thrashing the page file.
    @All: If someone can point to an open source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMware Fusion also sees this (for VMware vs VirtualBox compartmentalization).

    Test details:
        Hardware: MacBook Pro Retina
        CPU: Core i7 @ 2.3 GHz (quad core, hyperthreaded = 8 cores in Windows Task Manager)
        Memory: 16 GB
        Disk: 256 GB SSD
        Host OS: Mac OS X 10.8
        VM type: VirtualBox 4.1.18 (type 2 hypervisor)
        Guest OS: Windows 7 x64 SP1
        Compiler: VS2012 compiling a solution with 3 C# Azure projects
        Compile times measured by a VS2012 plugin called 'VSCommands'
        All tests run 5 times; first 2 runs discarded, last 3 averaged

    Read the article

  • mailman not relaying email to external address

    - by gozzilli
    I have a setup of Mailman with Postfix on an Ubuntu Server 12.04. My problem is that mailing list emails are not forwarded to email addresses external to my institution, although the initial welcome email is received by everyone, internally and externally. In fact, a simple email sent from the command line with mail is successfully delivered to anyone; after that, mailing list emails are only forwarded to internal addresses. The domain name I'm using for the server is not that of my institution, which hosts the server. Here is my main.cf:

        myorigin = sub.myinstitution.tld
        mynetworks = 127.0.0.0/8 xxx.xxx.xxx.xxx/16   # this is my institution's IP range
        relayhost = smtp.myinstitution.tld
        inet_interfaces = loopback-only
        local_transport = error:local delivery is disabled
        virtual_alias_maps = hash:/etc/postfix/virtual
        smtpd_recipient_restrictions = permit_mynetworks
        myhostname = mywebsite.tld
        mydestination = $myhostname, localhost.$mydomain, localhost

    I also found these two links on Server Fault and Ubuntu Forums, but neither of these solutions seems to do the trick for me. Any help would be much appreciated.
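
    One hedged first step, assuming the institutional relayhost is refusing to relay list traffic for external recipients: watch the Postfix log while posting to the list, and compare it with a plain mail send (log path is the Ubuntu default):

        tail -f /var/log/mail.log &
        echo "relay test" | mail -s "relay test" someone@external-domain.tld

    If the plain send relays but the list post is refused, a likely culprit is that the relayhost filters on the envelope sender, which Mailman rewrites to the list's bounce address (listname-bounces@mywebsite.tld), a domain the institutional relay may not accept.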

    Read the article

  • Why do Windows 7 & 8 have different default behaviour when trying to modify contents of protected folder

    - by Ben
    Here's the situation: I have a Windows 7 PC and a Windows 8 PC, and I'm logged in as the same domain user on both machines. My domain user is in the local Administrator group on both. When I run cmd.exe on each machine and then attempt this (on both machines):

        mkdir "c:\Program Files\cheese"

    the Windows 8 PC gives an "Access Denied" error, while it works fine on the Windows 7 PC. I understand that C:\Program Files is a protected folder, and I'm not interested in a debate on the morals of writing to such a folder directly. But I am interested in understanding what exactly has changed in Windows 8 to cause this. I don't seem to be able to find anything that acknowledges or explains this change in behaviour in Windows 8.
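
    A hedged way to compare the two machines, assuming the difference is UAC token filtering: check the integrity level each cmd.exe is running at (a filtered administrator token runs at Medium, an elevated one at High):

        whoami /groups | findstr /i "Mandatory"

    If the Windows 7 prompt reports High Mandatory Level while the Windows 8 prompt reports Medium, the mkdir difference is the prompt's elevation state rather than any change to the folder's ACL.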

    Read the article

  • Setting up WAMP to run on a LAN

    - by Steve
    I've installed WAMP on a Windows 7 PC, and it is running fine locally as localhost. I want PCs on the LAN to be able to view the local server. When they load my PC's IP address in their browser, they receive a "You don't have permission to access / on this server" error. I followed this guide, but the issue remains. To recap: I've added an inbound exception to Windows Firewall for port 80 for Private and Domain connections, and I've edited Apache's httpd.conf to include:

        Listen 80
        Listen 192.168.0.5:80

        <Directory "c:/wamp/www/wordpress/">
            Allow from all
        </Directory>

    I've edited httpd-vhosts.conf to include:

        <VirtualHost 192.168.0.5:80>
            DocumentRoot "C:/wamp/www/wordpress"
        </VirtualHost>

    Any ideas?
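
    One hedged guess at the missing piece, assuming WAMP's stock Apache 2.2 configuration: the default <Directory> blocks deny everything except 127.0.0.1, and in Apache 2.2 an Allow directive only takes effect together with an explicit Order, so the block could read:

        <Directory "c:/wamp/www/wordpress/">
            Order Allow,Deny
            Allow from all
        </Directory>

    WAMP's tray menu also has a "Put Online" toggle that flips the same restriction in its bundled config.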

    Read the article

  • Moving a Drupal site between Linux servers: best practices to avoid file-ownership problems

    - by zero
    I want to port over a Drupal Commons 6x24 from a local LAMP stack to a production webserver. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems due to a very strict webserver admin. For the long history of looking for fixes and solutions, see this thread, and even more interesting, see this very long and impressive thread here. All the troubles I run into have to do with file ownership and permissions. This is my current setup (note: this was just a quick-and-dirty installation); my interest is in the general options I have in porting a Drupal site from Linux to Linux:

        linux-vi17:/srv/www/htdocs/com624 # ls -l
        total 224
        -rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
        -rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
        -rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
        -rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
        -rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
        -rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
        -rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
        -rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
        -rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
        -rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
        drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
        drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
        -rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
        drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
        -rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
        -rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
        -rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
        linux-vi17:/srv/www/htdocs/com624 #

    Thanks to BetaRide's answer, here is a quick overview of the drush rsync functionality (http://drush.ws/):

        core-rsync    Rsync the Drupal tree to/from another server using ssh.

        Examples:
          drush rsync @dev @stage            Rsync Drupal root from dev to stage (one of which must be local).
          drush rsync ./ @stage:%files/img   Rsync all files in the current directory to the 'img' directory
                                             in the file storage folder on stage.

        Arguments:
          source        May be an rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
          destination   May be an rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.

        Options:
          --mode                 The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
          --RSYNC-FLAG           Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
          --exclude-conf         Excludes settings.php from being rsynced. Default.
          --include-conf         Allow settings.php to be rsynced.
          --exclude-files        Exclude the files directory.
          --exclude-sites        Exclude all directories in "sites/" except for "sites/all".
          --exclude-other-sites  Exclude all directories in "sites/" except for "sites/all" and the site directory
                                 for the site being synced. Note: if the site directory is different between the
                                 source and destination, use --exclude-sites followed by
                                 "drush rsync @from:%site @to:%site".
          --exclude-paths        List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
          --include-paths        List of paths to include, separated by : (Unix-based systems) or ; (Windows).

        Topics: docs-aliases (site aliases overview with examples)
        Aliases: rsync
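
    Building on that, a hedged sketch of the port itself, assuming drush site aliases @local and @prod have been defined in aliases.drushrc.php (alias and host names are illustrative), with ownership fixed up afterwards so it stays under your control:

        drush rsync @local @prod      # push the code tree
        drush sql-sync @local @prod   # push the database
        ssh prod-host 'chown -R wwwrun:www /srv/www/htdocs/com624 && \
                       chmod -R u=rwX,g=rX,o= /srv/www/htdocs/com624'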

    Read the article

  • hdmi audio works only with aplay -D alsa test wavs; open source radeon drivers; kernel 3.5 vgaswitcheroo

    - by user108754
    I've trolled the internets to make HDMI work on my system:

        Ubuntu 12.04 (Software Center), kernel 3.5
        uname: Linux ubuntu 3.5.0-18-generic #29~precise1-Ubuntu SMP ... x86_64 x86_64 x86_64 GNU/Linux
        open source radeon drivers
        vgaswitcheroo (hybrid Intel/Radeon GPU): I boot with intel, not radeon, running. (Recall that with
        kernel 3.5, vgaswitcheroo now gives info on a third item, "DIS-Audio"; it indicates pwr on my system.)

    /etc/rc.local:

        chown user:user /sys/kernel/debug/                  # change "user" to your user name
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch

    grub indeed now has "radeon.audio=1". For testing audio, I did aplay -l, which gave me the card and device, which made me try:

        aplay -D plughw:1,3 /usr/share/sounds/alsa/Front_Center.wav

    and lo! I get crystal clear sound on my HDTV. If I play an mp3 file as the argument to that command, I get noise, as I guess aplay interprets the mp3 data as a wav. If I play a .wav that is not in the /usr/share/sounds/alsa/ directory, I get nothing. Internet Flash video in the browser plays no sound over HDMI. Both the system sound control and pavucontrol have HDMI Cedar selected. Alas, I cannot get sound from any GUI test (left, right). Why would only aplay, and only when directed with "-D plughw", yield sound over HDMI? I've also tried using only one sound program at a time, in case it was a limitation of ALSA: I tried aplay with the web browser and even the sound control GUI closed, each of the last two running alone. No improvement. alsamixer only shows HDA Intel, and I think that's only the Intel audio, not the HDMI.
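
    A hedged sketch, assuming the working device really is card 1, device 3 as in the aplay test: pointing the ALSA default at it via ~/.asoundrc lets desktop programs that never specify plughw still reach the HDMI sink:

        # ~/.asoundrc
        pcm.!default {
            type plug
            slave.pcm "hw:1,3"   # card 1, device 3 -- the HDMI PCM from aplay -l
        }

    This only covers applications talking to ALSA directly; anything routed through PulseAudio still depends on the sink selected in pavucontrol.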

    Read the article

  • Accessing network resources via VPN connection fails

    - by LikeHoo
    I already found some information on this problem here, but I still can't get it to work. I'm trying to access some network resources on my server via VPN over the net. I'm using a Win7 Home PC here, and a Windows Server 2008 R2 box with the RAS & Routing role installed on the server. For VPN authentication I use a local user on the server with VPN access. This user also has the rights to access the network resources, but the client neither finds the server under Network nor is it able to connect the network drives... In similar topics here I found out that using the same credentials for VPN authentication and network resource access does not work, but using a different user for access didn't work either. All of the examples I found were in an Active Directory structure, but I don't have Active Directory here. Does anyone know how to solve this problem without having to use Active Directory? Thanks
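
    One hedged test, assuming name resolution over the VPN is the missing piece (network browsing rarely crosses a VPN link): map the share by the server's VPN-side IP address with explicit credentials (address, share, and user names are illustrative; the * prompts for the password):

        net use Z: \\192.168.10.1\share /user:SERVERNAME\vpnuser *

    If that works while the server name does not, the problem is NetBIOS/DNS resolution across the tunnel rather than permissions.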

    Read the article

  • MacBook Pro screen goes dark

    - by Mike M
    I've had my MacBook Pro for two years now with no problems so far (it has had 3rd-party RAM from the get-go). Today I was copying a particularly large VM from an external disk drive to the local MacBook disk. It had about 3 GB to go when I took off to do some other things, and when I came back my screen was "dark". The computer is still on but I can't see anything. I forced a reboot by holding down the power button; it starts up with the chimes, but still no screen. I've done this several times. Any ideas? Do you think the hard disk activity caused it to get too hot?

    Read the article

  • How do I use an SSH public key from a remote machine?

    - by kubi
    Setup: The public keys are set up on a MacBook. I can do a passwordless push to GitHub and to a server (iMac) on the local network.

    The problem: I know the keys are at least partially set up correctly, because everything works if I'm sitting at the MacBook. What doesn't work is when I SSH into the MacBook remotely and attempt to push to GitHub or to the iMac server; I'm prompted to input my SSH key passphrase. What am I missing to enable pushing to GitHub from the MacBook while logged in remotely from the iMac?
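
    A hedged explanation: sitting at the MacBook, Keychain (or a running ssh-agent) supplies the passphrase, while a remote login session has no agent attached. One sketch of a workaround is to start an agent inside the remote session and add the key once:

        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/id_rsa    # prompts for the passphrase once per session

    Alternatively, setting ForwardAgent yes for the MacBook host in the iMac's ~/.ssh/config forwards the iMac's agent into the session, assuming the iMac itself holds a key that GitHub and the server accept.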

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my SERVER'S IP. Is this an nginx problem?

    /etc/nginx.conf:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;
        events {
            worker_connections 5120;  # increase for busier servers
            use epoll;                # you should use epoll here for Linux kernels 2.6.x
        }
        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have a slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 63.6.1.12:80;
            server_name photo-rolldomain.com www.domain.com;
            access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/domain.com combined;
            root /home/mtech/public_html;
            location / {
                location ~ .*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location @backend {
                internal;
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location ~ /\.ht {
                deny all;
            }
        }
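
    The X-Forwarded-For and X-Real-IP headers are already being sent, so a hedged guess is that Apache simply isn't told to trust them. One common approach, assuming the backend is Apache 2.2 with the third-party mod_rpaf module installed (module path varies by build), is:

        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFsethostname On
        RPAFproxy_ips 63.6.1.12
        RPAFheader X-Forwarded-For

    With that in place, Apache rewrites the connection's remote address from the proxy's IP to the client IP carried in the header before logging or handing it to PHP.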

    Read the article

  • Route an IP from WAN to a host on LAN on OpenWRT

    - by Zsub
    EDIT: I know how to use NAT; I specifically want the server to be reachable on two IPs, one private, one public, with the firewall of the OpenWRT box in between, if feasible. At the office we have received a /29 from our ISP. The first address is reserved for their endpoint, so I'm free to use five addresses. We run a local network, so of course there is a router in between running OpenWRT to provide all hosts with (W)LAN (DHCP from a private range). However, we also have a server running OS X Server 10.6 (Snow Leopard), and I'd like that server to be accessible both from the LAN using a private IP as well as from the WAN on its own public IP. Point of note is that the server only has one network port, so multiple NICs are not an option, unfortunately.
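
    A hedged sketch of the routed (non-NAT) variant, assuming the ISP routes the whole /29 toward the OpenWRT box's WAN address (203.0.113.0/29 and 203.0.113.4 stand in for the real block and server address). On the server, add the public address as an alias on its single interface; on the router, route that address onto the LAN segment and open the firewall:

        # on the router: route the server's public IP onto the LAN bridge
        ip route add 203.0.113.4/32 dev br-lan

        # allow forwarding from wan to that address through the firewall
        uci add firewall rule
        uci set firewall.@rule[-1].src=wan
        uci set firewall.@rule[-1].dest=lan
        uci set firewall.@rule[-1].dest_ip=203.0.113.4
        uci set firewall.@rule[-1].target=ACCEPT
        uci commit firewall && /etc/init.d/firewall restart

    On the OS X side, the alias would be something like "ifconfig en0 alias 203.0.113.4 255.255.255.255". This keeps the OpenWRT firewall in the path while the server answers on both its private and public addresses.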

    Read the article

  • Slides, Materials, and Pictures from SharePoint Saturday Virginia Beach 2011

    - by Brian Jackett
    This past weekend I presented “Managing SharePoint 2010 Farms with PowerShell” and “SharePoint 2010 and Integrating Line of Business Applications” at SharePoint Saturday Virginia Beach. A big thanks to everyone who attended my sessions. I had a great time presenting, getting to meet new folks, and exploring a little bit of the local life. Below are slides, materials, and pictures from the event. Let me know if you have any comments, questions, or feedback. Thanks.

    Slides and Materials: Managing SharePoint 2010 Farms with PowerShell; SharePoint 2010 and Integrating Line of Business Applications

    Photos: Pictures on Facebook (Click Here); Pictures on Windows Live, higher res (SharePoint Saturday Virginia Beach Jan 2011)

    Side Note: SavePSToSP CodePlex Project. During my “Managing SharePoint 2010 Farms with PowerShell” session I mentioned a CodePlex project I am working on called SavePSToSP. Click here for the link to that project. I have been pushing out updates roughly once a month or more. If you have any feedback or find it helpful, feel free to let me know.

    -Frog Out

    Read the article

  • Emaroo 1.4.0 Released

    - by WeigeltRo
    Emaroo is a free utility for browsing most recently used (MRU) lists of various applications. Quickly open files, jump to their folder in Windows Explorer, copy their path - all with just a few keystrokes or mouse clicks.

    tl;dr: Emaroo 1.4.0 is out, go download it at www.roland-weigelt.de/emaroo

    Why Emaroo? Let me give you a few examples. Let's assume you have pinned Emaroo to the first spot on the taskbar so you can start it by hitting Win+1. To start one of the most recently used Visual Studio solutions, you type Win+1, [maybe arrow key down a few times], Enter. This means that you can start the most recent solution simply by Win+1, Enter.

    What else?

      - If you want to open an Explorer window at the file location of the solution, you type Ctrl+E instead of Enter.
      - If you know that the solution contains "foo" in its name, you can type "foo" to filter the list. Because this is not a general-purpose search like e.g. the Search charm, but instead operates only on the MRU list of a single application, you usually have to type only a few characters until you can press Enter or Ctrl+E.
      - Ctrl+C copies the file path of the selected MRU item, Ctrl+Shift+C copies the directory.
      - If you have several versions of Visual Studio installed, the context menu lets you open a solution in a higher version.
      - Using the context menu, you can open a Visual Studio solution in Blend.

    So far I have only mentioned Visual Studio, but Emaroo knows about other applications, too. It remembers the last application you used, and you can change between applications with the left/right arrow or accelerator keys. Press F1 or click the Emaroo icon (the tab to the right) for a quick reference.

    Which applications does Emaroo know about? Emaroo knows the MRU lists of:

      - Visual Studio 2008/2010/2012/2013
      - Expression Blend 4, Blend for Visual Studio 2012, Blend for Visual Studio 2013
      - Microsoft Word 2007/2010/2013
      - Microsoft Excel 2007/2010/2013
      - Microsoft PowerPoint 2007/2010/2013
      - Photoshop CS6
      - IrfanView (most recently used directories)
      - Windows Explorer (directories most recently typed into the address bar)

    Applications that are not installed aren't shown, of course.

    Where can I download it? On the Emaroo website: www.roland-weigelt.de/emaroo

    Have fun!

    Read the article

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that has been working fine for months started behaving unexpectedly. That code is just a media file loading JavaScript function that uses jQuery. It's pretty long, but in essence it is like this:

        var $audio = $('<audio>');
        $audio.on('canplaythrough', function (e) {
            $audio[0].play();
        });
        $audio.attr('src', 'song.ogg');

    Basically, the file only loads sometimes, and sometimes stops loading with a suspend or even an abort event. I have uploaded a little testing HTML page to http://www.joanbotella.com/tests/loading , where you can see what's happening. You can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, and not through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting provider and I have the Apache server misconfigured for serving media files. My software versions are: Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125 and jQuery 1.11.0. Can you help me? Thanks in advance!
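
    A hedged place to start on the Apache side: make sure Ogg gets a correct MIME type and is excluded from on-the-fly compression, since mod_deflate drops Content-Length, which some browsers' media loaders handle badly (the exclusion assumes mod_setenvif and mod_deflate are active):

        # httpd.conf or .htaccess
        AddType audio/ogg .ogg .oga
        SetEnvIfNoCase Request_URI \.(?:ogg|oga|mp3)$ no-gzip

    Checking the response headers with the browser's network panel (Content-Type, Content-Length, Accept-Ranges) against a direct file:// load would confirm whether this is the difference.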

    Read the article

  • South Florida Stony Brook Alumni & Friends Reception 2011

    - by Sam Abraham
    It’s official, we are kicking off a local South Florida chapter for Stony Brook alumni and friends in the area to keep in touch. Our first networking event will be taking place at Champps, Ft. Lauderdale on November 17th, 6:00-8:00 PM. Admission is free and open to everyone, whether or not they are Stony Brook alums. The team at Champps is offering us great specials (happy hour deals, half-price appetizers, etc.) that we can choose to enjoy while we network and catch up. (Event announcement: http://alumniandfriends.stonybrook.edu/page.aspx?pid=299&cid=1&ceid=171&cerid=0&cdt=11%2f17%2f2011) I look forward to sharing and reliving my college experience, which I believe was the starting line of my ongoing life journey. It would also be great to hear others’ takes as they reflect on their experiences throughout their college years. I invite anyone interested in keeping in touch with friends and alums of Stony Brook to join our LinkedIn or Facebook groups.

    The Stony Brook Alumni Association – South Florida Chapter LinkedIn Group: http://www.linkedin.com/groups?gid=3665306&trk=myg_ugrp_ovr
    The Stony Brook Alumni Association – South Florida Chapter Facebook Group: http://www.facebook.com/#!/groups/114760941910314/

    Read the article

  • Caching Reverse-Proxy ISP Host for a Low-Bandwidth Server

    - by Casey
    I am building a webcam with an HTTP server that will be running from a low-bandwidth connection. The content on the site will be changing every 5 to 10 minutes. Instead of serving files directly from this connection, are there hosting companies that can act as a reverse proxy for my site? That way, if nobody is using the site, the local internet connection remains idle; and if I receive 1000 hits all at the same time, only one HTTP GET is required, and the hosting company (on a fat pipe) serves the other 999 requests from its cache. This doesn't sound like a very common usage model, but I feel like this would be the optimal solution to my situation.
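
    What's described is essentially a caching reverse proxy with request collapsing, which nginx's proxy cache does out of the box. A hedged sketch of what a proxy host (or a cheap VPS you run yourself) might use, with an illustrative upstream name:

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcam:10m;
        server {
            listen 80;
            location / {
                proxy_pass http://webcam-home.example.org;   # the low-bandwidth origin
                proxy_cache webcam;
                proxy_cache_valid 200 5m;              # content changes every 5-10 minutes
                proxy_cache_lock on;                   # 1000 simultaneous misses -> one upstream GET
                proxy_cache_use_stale updating timeout;
            }
        }

    proxy_cache_lock is the directive that gives the "only one GET reaches the slow link" behavior for concurrent requests to the same URL.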

    Read the article

  • Loading main javascript on every page? Or breaking it up to relevant pages?

    - by Kyle
    I have a 700 kB decompressed JS file which is loaded on every page. Before, I had 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into 1 file. This file is ~130 kB gzipped and is served over gzip. However, on the client it is still unpacked and loaded on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any issues. The problem/illusion I am facing is that there are jQuery libraries compressed into that file that are sometimes not used on the current page. For example, jQuery DataTables is 200 kB compressed and is only loaded on 2 of my website's pages. Another is jqPlot, and that is another 200 kB. I now have 400 kB of excess code that isn't executed on 80% of the pages. Should I leave everything in 1 file? Should I take out the jQuery libraries and load only relevant JS on the current page?
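
    One hedged middle ground: keep the shared core in the combined file and lazy-load the heavy libraries only on pages that need them (the selector and file path here are illustrative):

        // load DataTables only on the pages that actually render such a table
        if ($('#report-table').length) {
            $.getScript('/js/jquery.dataTables.min.js', function () {
                $('#report-table').dataTable();
            });
        }

    Since the combined file is cached after the first page view, the main cost of the single-file approach is parse/execute time per page rather than repeated transfer, which is why profiling it is the right instinct.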

    Read the article

  • Rant - Why is Windows Azure not available in Africa?

    - by Allan Rwakatungu
    Yesterday at the .NET user group meeting in Kampala, Uganda, I gave a talk on cloud computing with Windows Azure (details will be in my next blog post). The guys were excited. Without owning their own infrastructure, and at low cost, they can build scalable, highly available applications. Not quite. Azure accounts are only available to people in particular countries - none from Africa. I attended PDC in 2008 when Microsoft unleashed Windows Azure. One of the case studies showing the benefits of cloud computing was a project in Africa for an education service in Ethiopia. The point they were making was that the cloud is perfect for scenarios where computing infrastructure is not sophisticated, like Ethiopia. Perfect, I thought. So I got my beta account from PDC and started playing around in the cloud. Then Azure goes live, my beta account does not work any more, and I can't pay because I am from Uganda. Microsoft, this sucks. I don't know Microsoft's reasons for doing this, but I am sure we can work something out. We in Africa need the cloud more than anybody else in the world. Setting up data centers that are highly scalable and available for our startups is not an option we have. But we also can't pay for cloud computing with Microsoft. Microsoft, we know we are a tiny, insignificant market for a company your size, but excluding us only continues to widen the digital divide. Microsoft, how about a reseller model for cloud computing? Instead of trying to deal directly with each client, you have local partners who help you sell and bill your cloud services. I think that would lead to Windows Azure being available in Africa. I can help you resell in Uganda.

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure agents handle this scale of concurrency with its thread-based agents? Other high performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However if it can't handle the performance then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared to these other platforms? (Thanks for your thoughts!)
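
    For concreteness, a hedged sketch of the per-file-agent design described above (paths, sizes, and helper names are illustrative, not a benchmarked implementation):

        ;; one agent per file: send-off serializes writes to a given file,
        ;; while distinct files proceed in parallel on the unbounded pool
        (def file-agents
          (vec (repeatedly 1000 #(agent nil))))

        (defn append-message [file-id ^String msg]
          (send-off (nth file-agents file-id)
                    (fn [_]
                      ;; spit with :append opens, writes, and closes per message;
                      ;; a real c10k service would likely keep writers open instead
                      (spit (str "/var/data/out-" file-id ".log")
                            (str msg "\n")
                            :append true))))

    Note that send-off actions for different agents share one cached thread pool, so "1,000+ threads" is the worst case when all files are being written simultaneously, not a standing cost.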

    Read the article
