Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

  • HP Procurve Issue Passing Multiple VLANs over a link

    - by MichaelRwat
    Just to start off with, I am a Cisco guy who got placed into an HP project. Basic topology overview from outside in: an ASA 5505 with two Ethernet connections to a 2910-24 port switch. This switch then trunks (in Cisco terminology) to a 2626 switch, passing VLANs 1 (untagged) and 100 (tagged) between them. I created SVIs on each of the switches for both VLANs for testing purposes. I cannot get VLAN 100 to pass across this link. I also have trunks configured to APs off of the switch and cannot ping the VLAN 100 BVI on the APs, but I can reach the VLAN 1 BVI. Port 25 on the access layer (2626) connects (trunks) with port A1 of the 2610. STP is not running at all on any switch (this is not my network; I can't change this, nor did I design it).

    Distribution switch:

        MP1-0# show run
        ip default-gateway 10.100.100.100
        vlan 1
           name "DATA"
           untagged 1-22,24-A1,B1
           ip address 10.100.100.6 255.255.255.0
           no untagged 23
           exit
        vlan 100
           name "GUEST"
           untagged 23
           tagged 24-A1
           ip address 10.100.102.6 255.255.255.0
           exit

    Access switch:

        ip default-gateway 10.100.100.100
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-26
           ip address 10.100.100.5 255.255.255.0
           exit
        vlan 100
           name "GUEST"
           ip address 10.100.102.5 255.255.255.0
           tagged 15,25
           exit

    From the ASA I can ping the VLAN 100 address of the 2610 but not the 2626 (10.100.102.6) [not passing the "trunk"]. If I plug into a VLAN 100 access port of the 2626, I can ping the SVI for VLAN 100 as intended. I cannot ping across the "trunk" over VLAN 100, but I can across VLAN 1. There may be something obvious I'm missing, but please review my configuration, and thank you for the assistance.
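
    On ProCurve gear, the per-VLAN port membership on both ends of a link is the first thing to verify. A sketch (the port numbers follow the question's topology and are otherwise assumptions; the second form is not available on every firmware):

        show vlans 100
        show vlans ports A1

    The first lists which ports carry VLAN 100 and whether each is tagged or untagged; if VLAN 100 shows up tagged on the uplink at one end only, frames are dropped silently at the other.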

  • FTP on Linux "Failed to retrieve directory listing" not firewall issue

    - by Jaka Prasnikar
    I've got a VPS in Germany running Debian x64, and I have a very strange issue. I have the ISPConfig control panel installed using proftpd, and I cannot connect to FTP by any means. A few hours ago I had DirectAdmin installed on CentOS on the same VPS, with the same issue. When I connect to the FTP server I get this:

        Status:   Resolving address of web02.defikon.com
        Status:   Connecting to 130.255.190.71:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 12:15. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command:  USER default1
        Response: 331 User default1 OK. Password required
        Command:  PASS ******
        Response: 230-User default1 has group access to: client0 sshusers
        Response: 230 OK. Current restricted directory is /
        Command:  OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is your current location
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    I even tried telnet localhost 21, and the same happens: once I issue the LIST command I get a timeout. I've tried everything and I can't get this to work. Please help!

    P.S.: iptables is turned off.
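
    The log shows the control connection working and the hang starting at PASV, which is the point where FTP opens a second, high-numbered data connection; if that port range is filtered anywhere along the path, directory listings time out exactly like this. A sketch for Pure-FTPd on Debian (the paths and port range are assumptions):

        # /etc/pure-ftpd/conf/PassivePortRange -- a single line pinning the data ports:
        30000 35000

        # then permit that range through any packet filter in front of the VPS:
        iptables -A INPUT -p tcp --dport 30000:35000 -j ACCEPT

    Even with iptables off on the guest, a firewall on the virtualization host or the provider's edge can still drop the data ports, which would explain the identical failure across two different distros on the same VPS.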

  • Apache mod_rewrite and mod_vhost_alias Virtual Hosts and %1

    - by Matt Wall
    I have put the main bits of my httpd.conf down below. I am using %1 to get the host field so I can dynamically add vhosts by just creating DNS entries/folders. One problem is I need to reference this:

        HttpStreamingLiveEventPath "D:/FMSApps/%1"
        HttpStreamingContentPath "D:/FMSApps/%1"

    In Apache, when I try, say, to request http://test.domain.com/hds-vod/myfile.mp4.f4m, it sees the %1 in the logs and fails. Apache gives me this:

        [error] mod_jithttp [403]: No access to D:/Content/%1/DefaultContent/eve.mp4

    What I'm looking for is for D:/Content/%1/DefaultContent/eve.mp4 to become D:/Content/test/DefaultContent/eve.mp4. Anyone have any useful resources/hints etc. to help me? Meanwhile my Google searching continues...!

        Listen 80
        ServerName main1.rtmphost.com
        AccessFileName .htaccess
        ServerSignature On
        UseCanonicalName Off
        HostnameLookups Off
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 15
        RewriteLogLevel 0
        RewriteLog logs/rewrite.log
        DocumentRoot D:/Content
        LoadModule vhost_alias_module modules/mod_vhost_alias.so
        VirtualDocumentRoot "D:/Content/%1"
        RewriteEngine On

        <Directory />
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
            Satisfy all
        </Directory>

        <IfModule f4fhttp_module>
            <Location /vod>
                HttpStreamingEnabled true
                HttpStreamingContentPath "D:/FMSApps/%1"
                Options FollowSymLinks
            </Location>
            Redirect 301 /live/events/livepkgr/events /hds-live/livepkgr
            <Location /hds-live>
                HttpStreamingEnabled true
                HttpStreamingLiveEventPath "D:/FMSApps/%1"
                HttpStreamingContentPath "D:/FMSApps/%1"
                HttpStreamingF4MMaxAge 2
                HttpStreamingBootstrapMaxAge 2
                HttpStreamingFragMaxAge -1
                Options FollowSymLinks
            </Location>
        </IfModule>
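
    The underlying issue is that %1 is an interpolation understood only by mod_vhost_alias directives (VirtualDocumentRoot and friends); the FMS HTTP streaming module receives the string literally, which is why "%1" shows up verbatim in the error path. One possible workaround is to have mod_rewrite capture the first label of the Host header and rewrite the request path before the streaming handler sees it -- a sketch, untested against mod_jithttp, with the URL layout as an assumption:

        RewriteEngine On
        # %1 below refers to the RewriteCond capture, i.e. the "test" in test.domain.com
        RewriteCond %{HTTP_HOST} ^([^.]+)\. [NC]
        RewriteRule ^/hds-vod/(.*)$ /hds-vod/%1/$1 [PT,L]

    Whether the streaming module resolves the rewritten path against its own ContentPath is something to verify; if it performs its own URL-to-file mapping, the per-host substitution may need to happen at that module's configuration level instead.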

  • Bacula v5.0.2 Windows Installation Issues

    - by JohnyD
    First off, I am very new to Bacula, but I'm very intrigued by what I've read. I'm looking to set up Bacula 5.0.2 on a Windows 2008 R2 server. I've run the installer, and at the end it asks me to configure the DIR name, DIR password, and DIR address. Windows documentation is somewhat hard to come by, and I'm not certain what exactly I'm supposed to enter here. Do I need to create a local account that matches this info? Will the installation process create the account for me? Will this be the account that handles the FD daemon/service? I'm also not certain whether "address" means a network location or a local directory. I apologize for my ignorance. Currently I'm trying the following information:

        Name:    john
        Pass:    john
        Address: thin1 (server name, although I have also tried thin1.fqdm.local and 10.0.0.104)

    This info allows the installer to complete successfully. However, when I run BAT it hangs at "Connecting to Director thin1:9101". The Bacula File Service is currently running under the local system account. What am I doing wrong? What do I have yet to do? Once I get this working properly, I assume I will need to install clients on all my Windows boxes? Also, this is a 64-bit CPU but I am installing the 32-bit client. Are there any issues with this? Should I be using the 32-bit client? Thanks very much for the help.
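
    For what it's worth, those three values are not a Windows account at all -- they describe the Bacula Director that the local File Daemon should trust, and they must mirror the Director's own configuration. A sketch of the matching resource (the names and password are placeholders):

        # bacula-fd.conf on the Windows client
        Director {
          Name = thin1-dir          # must equal the Name in the Director resource of bacula-dir.conf
          Password = "secret"       # must equal the Password the Director uses for this client
        }

    A hang at "Connecting to Director thin1:9101" usually just means nothing is listening there; if the 5.0.2 Windows package installs only the File Daemon (as the later client-only Windows builds do), the Director and console need to run on another machine, and BAT should point at that host instead.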

  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (x86_64), a free clone of Red Hat 6.4, on AMD hardware.

        [root@lov3 edac]# uname -a
        Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

        [root@lov3 mcelog]# lscpu
        Architecture:          x86_64
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                64
        On-line CPU(s) list:   0-63
        Thread(s) per core:    2
        Core(s) per socket:    8
        Socket(s):             4
        NUMA node(s):          8
        Vendor ID:             AuthenticAMD
        CPU family:            21
        Model:                 2
        Stepping:              0
        CPU MHz:               1400.000
        BogoMIPS:              4999.30
        Virtualization:        AMD-V
        L1d cache:             16K
        L1i cache:             64K
        L2 cache:              2048K
        L3 cache:              6144K
        NUMA node0 CPU(s):     0-7
        NUMA node1 CPU(s):     8-15
        NUMA node2 CPU(s):     16-23
        NUMA node3 CPU(s):     24-31
        NUMA node4 CPU(s):     32-39
        NUMA node5 CPU(s):     40-47
        NUMA node6 CPU(s):     48-55
        NUMA node7 CPU(s):     56-63

    My mcelog.conf file is more or less default, apart from the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog:

        [root@lov3 mcelog]# mcelog --config-file mcelog.conf
        AMD Processor family 21: Please load edac_mce_amd module.

    However, the module is present:

        [root@lov3 mcelog]# locate edac_mce_amd.ko
        /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
        /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko

    and loaded:

        [root@lov3 edac]# lsmod | grep mce
        edac_mce_amd           14705  1 amd64_edac_mod

    Is there anything that I can do to get mcelog working? The only reference I found is this thread: http://lists.centos.org/pipermail/centos/2012-November/130226.html
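
    One data point that may help: on recent AMD families, machine-check decoding is handled in-kernel by the EDAC drivers, and mcelog prints that (somewhat misleading) message and exits even when edac_mce_amd is already loaded -- the userspace daemon simply does not support family 21. A hedged sketch of where the same information surfaces without mcelog:

        # corrected/uncorrected error counters kept by EDAC, per memory controller
        grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count
        # decoded machine-check events go to the kernel log
        dmesg | grep -iE 'mce|edac'

    If the goal is daemonized logging, pointing the monitoring at the kernel log (or at those sysfs counters) is the usual substitute on this hardware.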

  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my 64-bit CentOS server to be able to perform some GIS operations. I tried a simple:

        # yum install gdal

    First, the GDAL version is 1.4 (the latest release is 1.9). Then, I see mysql in the dependency list. But I already have MySQL installed, from another repository (remi), with a newer version than the one suggested by yum... Is it a problem of architecture (yum suggests i386)? I risked a yes, but it is still impossible to install. Here's the error I get:

        Transaction Check Error:
          package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed

    Then, I tried to install it from source with the latest version available (1.9.2). I downloaded the GDAL tar.gz, extracted the files and installed it as follows:

        # tar -xzf gdal-1.9.2.tar.gz
        # cd gdal-1.9.2
        # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        # make
        # make install

    But during the make, I get some strange errors about RegisterOGRMySQL that I can't understand:

        chmod a+x gdal-config
        /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo
        libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64
        /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL'
        collect2: ld returned 1 exit status
        make[1]: *** [gdalinfo] Error 1
        make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps'
        make: *** [apps-target] Error 2

    Has anyone a solution? Thanks a lot!
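
    An undefined reference to RegisterOGRMySQL at link time usually means configure half-enabled the MySQL driver (headers detected, client library not linked). Two hedged ways out, depending on whether the MySQL OGR driver is actually needed:

        # build without the MySQL driver entirely...
        ./configure --without-mysql [other options as above]

        # ...or point configure at the mysql_config that matches the remi packages
        ./configure --with-mysql=/usr/bin/mysql_config [other options as above]

    As for yum, the i386 suggestion can often be sidestepped by requesting the architecture explicitly (e.g. yum install gdal.x86_64), assuming the repository actually carries a 64-bit build.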

  • Ping: sendmsg: operation not permitted error after installing iptables on Arch GNU/Linux

    - by estol
    Yesterday I got a new computer for my home server, an HP ProLiant MicroServer. I installed Arch Linux on it, with kernel version 3.2.12. After installing iptables (1.4.12.2, the current version AFAIK), changing the net.ipv4.ip_forward key to 1, enabling forwarding in the iptables configuration file, and rebooting, the system cannot use any of its network interfaces. Ping fails with:

        ping: sendmsg: operation not permitted

    If I remove iptables completely, networking is okay, but I need to share the Internet connection to the local network.

    - eth0 - WAN NIC integrated on the motherboard (no idea of vendor, probably HP)
    - eth1 - LAN NIC in a PCI Express slot (Intel Gigabit CT Desktop, http://www.intel.com/content/www/us/en/network-adapters/gigabit-network-adapters/gigabit-ct-desktop-adapter.html)

    Since it works without iptables (the server can access the Internet, and I can log in with ssh from the internal network), I assume it has something to do with iptables. I do not have much experience with iptables, so I used these as references (separate from each other, of course...):

    - wiki.archlinux.org/index.php/Simple_stateful_firewall#Setting_up_a_NAT_gateway
    - revsys.com/writings/quicktips/nat.html
    - howtoforge.com/nat_iptables

    On my previous server, I used the revsys guide to set up NAT, and it worked like a charm. Has anyone experienced anything like this before? What am I doing wrong? Thanks, estol
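
    "operation not permitted" from ping is the kernel reporting that a netfilter rule (very often a default DROP policy on OUTPUT, loaded from a saved ruleset) rejected the locally generated packet; `iptables -L -v -n` shows per-rule packet counters, so the rule whose counter climbs while ping fails is the culprit. A minimal NAT sketch, assuming eth0 is WAN and eth1 is LAN as described:

        iptables -P OUTPUT ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT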

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking this, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. Some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of RARs and ZIPs which it is necessary to search inside.
    - Permissions support, allowing user-based control to reflect the current permission setup of the Samba shares. The user base will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows-based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?

  • Looking for a comprehensive/"expert" guide to BCD parameters

    - by Stilez
    I'm interested in educating myself about BCD on Windows 8. There are many, many "walkthrough" guides and "howtos", but I can't find any guides at a typical "enthusiast" level covering what each option or argument in a BCD /ENUM dump might mean, and the principles governing how these all work together. Imagine trying to rebuild or debug BCD (including EFI/BIOS variants and recovery/hibernate/memtest sections, and perhaps multi-boot Windows/WinPE/WinRE) from scratch using just BCDedit + DiskPart, and trying to understand rather than just copy/paste commands. That's roughly the knowledge I'm after. Example questions might be:

    - How is a BCD /ENUM dump to be read, item by item? How do its sections work together? (A lot of guides only show a specific example rather than explaining all the common args that can exist and what they mean; they don't actually explain how sections work together, or they assume MBR/BIOS/Vista/7 and omit info needed for EFI/GPT/dynamic disks/8.)
    - Partitions are specified by volume letter or as a \Device\HarddiskVolumeNNN. Why does it sometimes show these items as a letter and sometimes as a GUID? What are the practical differences, if any?
    - What exactly is syntax like "ramdisk=[C:]\Images\winpe.wim,{ramdiskoptions}" saying, and how will the drive letter "C" be interpreted at runtime in a line like this? Is the drive in such a line always "C:" (most examples assume so), and if not, when wouldn't it be?
    - Many websites state that an sdi device and path may be needed in some sections of BCD, but what is sdi, and what are these args doing when they appear?
    - How does the GUID-to-HDD-volume/partition mapping work under EFI/GPT? So that if disks or partitions/volumes change, it's clear how one can confirm from basic principles whether the data shown in BCD /ENUM ALL is still correct or not.

    Does anyone know of a suitable reference source for this kind of raw BCD data and structures? Thanks!
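
    Not a reference, but a starting point for the letter-vs-GUID bullet above: bcdedit substitutes well-known aliases for object GUIDs unless asked not to, which is why the same store can look different between guides. For example (standard bcdedit switches):

        bcdedit /enum all
        bcdedit /enum all /v

    The first form shows every object (boot manager, loaders, resume, memtest, ramdiskoptions); adding /v disables the aliasing, so entries like {current} appear as the raw GUIDs the store actually keys on.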

  • How to collect the performance data of a server during an unreachable/down period using Nagios?

    - by gsc-frank
    Sometimes services and hosts stop responding due to poor server performance. I mean, if for some reason (lots of concurrent service accesses, an expensive backup execution on the server, or whatever else consumes tons of server resources) a server's performance is badly degraded, the server may not be able to establish any "normal network communication" (without triggering whatever standard timeouts are defined for such communication). Knowing the host's performance data (CPU, memory, ...) during that period -- if available (i.e., the host is not down, and despite the performance degradation plugins can still collect performance data) -- could be very useful for the sysadmin to try to determine what caused the problem, or at least whether the host's performance was good and didn't interfere at all in the host/service outage. This problem could be solved using remote active (NRPE) or remote passive (NSCA) checks, if such remote solutions could store (buffer) perf data to be sent to the central Nagios server when host performance or the network outage allows it. I read the docs of both solutions and can't find any reference to such a buffer mechanism, nor to what happens in case NSCA can't reach the Nagios server. Any idea how to fill this gap? It would be so useful for forensic analysis.

    EDIT: My question isn't about which tools I can use to debug perf problems or gather perf data for analysis, but about how to collect (using Nagios) host perf data even during a network outage, for later (forensic-style) analysis. The idea is to integrate such data into Nagios graphers like pnp4nagios and NagiosGrapher. I know that I could install a tool like Cacti on each of my hosts and have a kind of performance-data-collection redundancy, but I really want to avoid that and try to solve all perf analysis requirements with one tool: Nagios.
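
    NSCA itself indeed has no store-and-forward: send_nsca just reads results from stdin (one host<TAB>service<TAB>return-code<TAB>output line per result) and fails if the central server is unreachable. That makes a small spool-and-flush wrapper on the monitored host possible -- a sketch, with hostnames and paths as placeholders:

        # append results locally as they are produced
        printf 'web01\tCPU Load\t0\tOK - load 0.42\n' >> /var/spool/nsca/results.buf

        # flush the buffer whenever the central server is reachable again
        if send_nsca -H nagios.example.com -c /etc/nagios/send_nsca.cfg < /var/spool/nsca/results.buf; then
            : > /var/spool/nsca/results.buf
        fi

    One caveat: results delivered late carry the submission time, not the collection time, so whether pnp4nagios graphs them in the right slot needs verifying; embedding the original timestamp in the plugin output is one way to keep the forensic value.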

  • HP c6180 driver refuses to install - "must reboot to continue delete files and continue install" loop

    - by Aszurom
    A user called me last night: he can't get the HP drivers for his printer to install. It's a 500 MB download for the latest c6180 driver. The PC says "computer must be restarted to delete some files, then setup will continue", and it won't get past that point (that's not the exact phrasing of the error, and I'm not on the system now). The target machine was XP SP2; I upgraded it to SP3, fully patched. I ran MalwareBytes, Spybot and HijackThis, and turned off all startup entries. I noted that Windows Update also failed on optional printer driver updates for other printers installed on the machine. I went to the print server properties, removed all printer drivers from the system, and deleted all printer ports. I made sure no services were marked disabled. After 5 hours of fighting this I started to get desperate and uninstalled anything on the machine that referenced HP at all. Still no cure. I'm 6 hours into this and completely stumped. Google returns nothing applicable except unanswered requests for help on the same topic.
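
    That particular "must be restarted to delete some files" prompt is typically driven by leftover entries in the registry's pending-rename list, which some installers check at startup; if the entries never clear, the check loops forever. A hedged place to look (export the key before touching anything):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations

    If that value persists across reboots, clearing it manually often unblocks installers stuck at this exact message.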

  • How to plug power/reset buttons from case to motherboard leads?

    - by MaxMackie
    I have a motherboard I salvaged from a pre-assembled computer, and now I'm trying to use it in my own custom build. The problem is, this motherboard doesn't have any documentation, because it was never meant to be sold to consumers (as far as I know). I need to plug my case's power/reset/HDD-light plugs into the motherboard. I would usually check the board's documentation to see which leads go to which connector, but I have none for this board. So, as I see it, I have two options:

    - I find the documentation (I've emailed Gateway customer service, but I'm unsure how successful that will be).
    - I simply test the leads one after the other (can this cause damage if plugged into the wrong leads?).

    However, there might even be a standard for which leads do what (I'm not sure about this). For reference, my motherboard's SN/MD (?) is: H57M01G1-1.1-8EKS3H. Does anyone have any idea if I can find documentation, or another way to be sure my connections are correct?

  • Running gdb on Ubuntu 9.10 Apache2 Install

    - by AJ
    Hi all, I am trying to run gdb to debug my Ubuntu 9.10 Apache2 install and am having a couple of problems:

    - It seems like the package installed by Ubuntu for Apache2 does not include debugging symbols; is there a different version of the package I should be using for developing/debugging?
    - When I try to run gdb, I get an error that looks to be caused by a missing environment variable. Are there additional options I should pass to "run" to get this to work?

    Here is the output of the debugger session:

        root@aj-ubuntu:/usr/sbin# gdb apache2
        GNU gdb (GDB) 7.0-ubuntu
        Copyright (C) 2009 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
        and "show warranty" for details.
        This GDB was configured as "x86_64-linux-gnu".
        For bug reporting instructions, please see:
        <http://www.gnu.org/software/gdb/bugs/>...
        Reading symbols from /usr/sbin/apache2...(no debugging symbols found)...done.
        (gdb) run -X
        Starting program: /usr/sbin/apache2 -X
        [Thread debugging using libthread_db enabled]
        apache2: bad user name ${APACHE_RUN_USER}
        Program exited with code 01.
        (gdb)

    Thanks in advance, -aj
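
    Both symptoms have conventional fixes on Debian-family systems, sketched below (the -dbg package name is what Ubuntu shipped around that time; worth confirming for 9.10):

        # debugging symbols for the stock apache2 binary
        apt-get install apache2-dbg

        # apache2.conf references ${APACHE_RUN_USER} etc., which apachectl normally
        # exports from /etc/apache2/envvars -- source it before launching gdb
        . /etc/apache2/envvars
        gdb /usr/sbin/apache2
        (gdb) run -X

    Alternatively, the variables can be set inside gdb with "set environment APACHE_RUN_USER www-data" (and likewise APACHE_RUN_GROUP and APACHE_PID_FILE) before "run".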

  • Powershell: Execute exe on remote server and capture output

    - by user364825
    I am trying to script the execution of an installer on remote web servers. The installer in question is also a Windows service that hosts NServiceBus. If RDP'd into the server, the application is installed by the following command:

        &"$theInstaller" /install /serviceName:TheServiceName

    The installer prints output about its progress registering the service and connecting to the database to stdout, among other things. This works fine from an RDP session, but when I execute it remotely via PowerShell, I get a you-can't-do-this-over-the-network message, whether I execute it directly or via Invoke-Command -ComputerName $theRemoteServer:

        System.IO.FileLoadException: Could not load file or assembly 'file://\\theRemoteServer\c$\thePath\AutoMapper.dll'
        or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) --->
        System.NotSupportedException: An attempt was made to load an assembly from a network location which would have
        caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET
        Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended
        to sandbox the assembly, please enable the loadFromRemoteSources switch. See
        http://go.microsoft.com/fwlink/?LinkId=155569 for more information.

    (Note: I added an additional "\" to the path in the first line in order to get it to show up correctly in the preview on this site.)

    This DLL, among others, is loaded by the service, and the service's execution context cannot, apparently, be remotified. I have also tried using Invoke-WmiMethod, which does something, but it's not clear what, and the output from the installer is lost:

        Invoke-WMIMethod win32_process create '"$theInstaller" /install /serviceName:TheServiceName' -ComputerName $server

    (with and without cmd.exe /k before the installer reference):

        __GENUS          : 2
        __CLASS          : __PARAMETERS
        __SUPERCLASS     :
        __DYNASTY        : __PARAMETERS
        __RELPATH        :
        __PROPERTY_COUNT : 2
        __DERIVATION     : {}
        __SERVER         :
        __NAMESPACE      :
        __PATH           :
        ProcessId        :
        ReturnValue      : 9

    How does one remotely execute such an EXE and capture the output? Thanks!
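
    The exception suggests the installer (or its working directory) is being referenced through the \\server\c$ UNC path, so .NET treats every dependency as network-loaded. One hedged way around it is to make sure everything runs from a path that is local from the remote machine's point of view, inside the Invoke-Command script block, where stdout flows back to the caller (the paths and names below are placeholders):

        $output = Invoke-Command -ComputerName $server -ScriptBlock {
            # stage the installer on a local disk of the remote box first
            Copy-Item '\\fileserver\installers\TheInstaller.exe' 'C:\Temp\TheInstaller.exe' -Force
            # run it locally; 2>&1 folds stderr into the captured stream
            & 'C:\Temp\TheInstaller.exe' /install /serviceName:TheServiceName 2>&1
        }
        $output

    (For reference, ReturnValue 9 from Win32_Process.Create means "path not found" -- and the WMI call has no mechanism for capturing the child's stdout anyway, so even a successful create would lose the installer's progress output.)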

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? I also have specific questions about the items I changed:

    - What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    - What exactly does "recycled" mean? Is the application completely torn down and started up again?
    - What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for the Idle Time-out under Process Model talks about shutting down the worker process, while the docs for the Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two. I thought w3wp.exe is the worker process which runs the application pool. Can someone explain the difference as it affects the application?

    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions. (Settings screenshot omitted.)
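
    The same three UI changes can be scripted with appcmd, which also makes them easy to audit across servers -- a sketch with the pool name as a placeholder:

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /cpu.resetInterval:00:00:00
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00

    Two other recycle triggers worth checking for a long-lived cache are the scheduled-times list (recycling.periodicRestart.schedule) and the memory limits (recycling.periodicRestart.privateMemory). As for the terminology: an idle shutdown simply stops w3wp.exe until the next request arrives, while a recycle starts a replacement worker process and retires the old one -- but either way the in-process cache is lost.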

  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got Server 2008 R2 via DreamSpark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all of its own Internet traffic through the other NIC -- which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install tucked as far behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks. However, I don't trust Windows Server, because I don't know the OS well enough yet. So, three questions:

    - How do I do this?
    - Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)?
    - Should I virtualize the Server 2008 R2 rig instead, and if so, in what? (Core 2 Duo E6600 with 5 GB of usable RAM.)

  • Nginx all subdomain points to one subdomain (gitlab) rule

    - by Alkimake
    I have installed GitLab on my server and use nginx as the HTTP server... I simply used the GitLab recipe for nginx:

        # GITLAB
        # Maintainer: @randx
        # App Version: 3.0

        upstream gitlab {
            server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen 192.168.250.81:80;       # e.g., listen 192.168.1.1:80;
            server_name gitlab.xxx.com;     # e.g., server_name source.example.com;
            root /home/gitlab/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log  /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from the defined root folder;
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file which is not found in the root folder is requested,
            # then the proxy passes the request to the upstream (gitlab unicorn)
            location @gitlab {
                proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_redirect off;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://gitlab;
            }
        }

    gitlab.xxx.com works fine and I get the GitLab web pages. But another subdomain I use for Jira -- jira.xxx.com on port 80 (Jira itself runs on port 8080) -- gets the GitLab website as well. How can I restrict this rule to serving only GitLab, or maybe redirect jira.xxx.com to jira.xxx.com:8080?
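
    This is nginx's default-server behavior: with only one server block listening on 192.168.250.81:80, every Host header -- including jira.xxx.com -- falls through to it, server_name notwithstanding. A hedged sketch of two additional blocks, one to proxy Jira and one to catch strays (adjust the names and the upstream address):

        # explicit vhost for Jira, proxied to the instance on :8080
        server {
            listen 192.168.250.81:80;
            server_name jira.xxx.com;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $http_host;
            }
        }

        # catch-all for every other name pointed at this IP
        server {
            listen 192.168.250.81:80 default_server;
            server_name _;
            return 444;   # close the connection without a response
        }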

  • Convert raw IMAP server data into local folders, then upload partial dataset to new IMAP server?

    - by Manca Weeks
    I am transitioning a company with about 30 IMAP accounts, loaded with data (about 77 GB total), to a new email host. The majority of the data will be converted into a local archive and distributed to the company computers as a static reference data set. The server-side folders the users absolutely cannot do without will be uploaded back to the new server. I used Mac OS X Mail (Snow Leopard 10.6.6) to download the content. I notice some messages have the name [xxx].partial.emlx, which leads me to believe they have not been downloaded all the way. I have root access to the mail server data and could download the IMAP server data via FTP, but I am not sure what utility to use to convert that data into local Mail.app mailboxes. Furthermore, I would appreciate any input on the best way to upload a portion of the data to the new server (GoDaddy), preserving the original dates of the messages.

    EDIT: OK - forget the raw server data. I found a script that apparently does a pretty good job of archiving IMAP folders to local mbx files. My main quest now is to batch-upload a mailbox hierarchy to the new IMAP server without having to start-stop and deal with similar issues. Anyone know of a utility (hopefully for OS X, but if not, I'll fire up my XP virtual system...) capable of this?

    Thanks, M
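
    One tool built for exactly this hop is imapsync, which copies folder hierarchies server-to-server over IMAP and preserves each message's internal date, sidestepping the download/re-upload round trip entirely. A sketch per account (the hosts and credentials are placeholders, and GoDaddy's actual IMAP hostname should be checked):

        imapsync --host1 mail.oldhost.com --user1 alice --password1 'oldpass' \
                 --host2 imap.newhost.example --user2 alice --password2 'newpass' \
                 --folder 'INBOX.Keep' --folder 'INBOX.Projects' --subscribe

    Restricting the copy with --folder (repeatable) matches the "partial dataset" requirement; run without it to mirror everything.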

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured sshd using Cygwin on Windows 7. I have checked and double-checked all of the following points:

    - Port forwarding is correctly configured
    - Windows Firewall is configured to pass port 22
    - Local login attempts (using Cygwin SSH) succeed
    - sshd_config has UseDNS No
    - Using nmap from a remote machine confirms port 22 is accessible
    - /etc/passwd and /etc/group are correctly populated

    However, remote login attempts time out, including from the local network:

        user@host:~$ ssh -vvv [email protected]
        OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /home/user/.ssh/config
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22.
        debug1: connect to address the.ip.add.ress port 22: Connection timed out
        ssh: connect to the.ip.add.ress port 22: Connection timed out

    No messages are logged to /var/log/sshd.log. I suspect that there is a permissions issue with a particular file somewhere; however, I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of:

    - /etc/passwd
    - /etc/group
    - /var
    - /var/log/sshd.log
    - /var/empty

    Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?
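
    A "connection timed out" at the TCP stage (before any SSH banner is exchanged) means the SYN never reached sshd, so file permissions cannot yet be in play -- a permissions problem would let the connection be accepted and then fail. One hedged server-side check to split the problem:

        # on the Windows box: is sshd bound to all interfaces, and do remote SYNs arrive at all?
        netstat -an | findstr :22

    A listener on 0.0.0.0:22 plus no SYN_RECEIVED entries while a remote client retries points at filtering or NAT in front of the machine; note that nmap showing the port open from one network doesn't prove it is reachable from the network where the timeout happens.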

  • Users using Perl script to bypass Squid Proxy

    - by mk22
    The users on our network have been using a Perl script to bypass our Squid proxy restrictions. Is there any way we can block this script from working??

        #!/usr/bin/perl
        ########################################################################
        # (c) 2008 Indika Bandara Udagedara
        # [email protected]
        # http://indikabandara19.blogspot.com
        #
        # ----------
        # LICENCE
        # ----------
        # This work is protected under GNU GPL
        # It simply says
        # " you are hereby granted to do whatever you want with this
        #   except claiming you wrote this."
        #
        #
        # ----------
        # README
        # ----------
        # A simple tool to download via http proxies which enforce a download
        # size limit. Requires curl.
        # This is NOT a hack. This uses the absolutely legal HTTP/1.1 spec
        # Tested only for squid-2.6. Only squids will work with this(i think)
        # Please read the verbose README provided kindly by Rahadian Pratama
        # if u r on cygwin and think this documentation is not enough :)
        #
        # The newest version of pget is available at
        # http://indikabandara.no-ip.com/~indika/pget
        #
        # ----------
        # USAGE
        # ----------
        # + Edit below configurations(mainly proxy)
        # + First run with -i <file> giving a sample file of same type that
        #   you are going to download. Doing this once is enough.
        #   eg. to download '.tar' files first run with
        #       pget -i my.tar ('my.tar' should be a real file)
        # + Run with
        #       pget -g <URL>
        #
        ########################################################################

        ########################################################################
        # CONFIGURATIONS - CHANGE THESE FREELY
        ########################################################################
        # *magic* file
        # pls set absolute path if in cygwin
        my $_extFile = "./pget.ext";

        # download in chunks of below size
        my $_chunkSize = 1024*1024;            # in Bytes

        # the proxy that troubles you
        my $_proxy      = "192.168.0.2:3128";  # proxy URL:port
        my $_proxy_auth = "user:pass";         # proxy user:pass

        # whereis curl
        # pls set absolute path if in cygwin
        my $_curl = "/usr/bin/curl";

        ########################################################################
        # EDIT BELOW ONLY IF YOU KNOW WHAT YOU ARE DOING
        ########################################################################
        use warnings;

        my $_version = "0.1.0";

        PrintBanner();

        if (@ARGV == 0) {
            PrintHelp();
            exit;
        }

        PrimaryValidations();

        my $val;
        while (scalar(@ARGV)) {
            my $arg = shift(@ARGV);
            if ($arg eq '-h') {
                PrintHelp();
            } elsif ($arg eq '-i') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-i option requires a filename\n");
                    exit;
                }
                Init($val);
            } elsif ($arg eq '-g') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-g option requires a URL\n");
                    exit;
                }
                GetURL($val);
            } elsif ($arg eq '-c') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-c option requires a URL\n");
                    exit;
                }
                ContinueURL($val);
            } else {
                printf ("Unknown option %s\n", $arg);
                PrintHelp();
            }
        }

        sub GetURL {
            my ($URL) = @_;
            chomp($URL);
            my $fileName = GetFileName($URL);
            my %mapExt;
            my $first;
            my $readLen;
            my $ext = GetExt($fileName);
            ReadMap($_extFile, \%mapExt);
            if ( exists($mapExt{$ext})) {
                $first = $mapExt{$ext};
                GetFile($URL, $first, $fileName, 0);
            } else {
                die "Unknown ext in $fileName. Rerun with -i <fileName>";
            }
        }

        sub ContinueURL {
            my ($URL) = @_;
            chomp($URL);
            my $fileName = GetFileName($URL);
            my $fileSize = 0;
            $fileSize = -s $fileName;
            printf("Size = %d\n", $fileSize);
            my $first = -1;
            if ( $fileSize > 0 ) {
                $fileSize -= 1;
                GetFile($URL, $first, $fileName, $fileSize);
            } else {
                GetURL($URL);
            }
        }

        sub Init {
            my ($fileName) = @_;
            my ($key, $value);
            my %mapExt;
            my $ext = GetExt($fileName);
            if ( $ext eq "") {
                die "Cannot get ext of \'$fileName\'";
            }
            ReadMap($_extFile, \%mapExt);
            my $b = GetFirst($fileName);
            $mapExt{$ext} = $b;
            WriteMap($_extFile, \%mapExt);
            print "I handle\n";
            while ( ($key, $value) = each(%mapExt) ) {
                print "\t$key -> $value\n";
            }
        }

        sub GetExt {
            my ($name) = @_;
            my @x = split(/\./, $name);
            my $ext = "";
            if (@x != 1) {
                $ext = pop @x;
            }
            return $ext;
        }

        sub ReadMap {
            my($fileName, $mapRef) = @_;
            my $f;
            my @arr;
            open($f, '<', $fileName) or die "Couldn't open $fileName";
            my %map = %{$mapRef};
            while (<$f>) {
                my $line = $_;
                chomp($line);
                @arr = split(/[ \t]+/, $line, 2);
                $mapRef->{ $arr[0]} = $arr[1];
            }
            printf("known ext\n");
            while (($key, $value) = each(%$mapRef)) {
                print("$key, $value\n");
            }
            close($f);
        }

        sub WriteMap {
            my ($fileName, $mapRef) = @_;
            my $f;
            my @arr;
            open($f, '>', $fileName) or die "Couldn't open $fileName";
            my ($k, $v);
            while( ($k, $v) = each(%{$mapRef})) {
                print $f "$k" . "\t$v\n";
            }
            close($f);
        }

        sub PrintHelp {
            print "usage:
            -h             Print this help
            -i <filename>  Initialize for this filetype
            -g <URL>       Get this URL\n
            -c <URL>       Continue this URL\n"
        }

        sub GetFirst {
            my ($fileName) = @_;
            my $f;
            open($f, "<$fileName") or die "Couldn't open $fileName";
            my $buffer = "";
            my $first = -1;
            binmode($f);
            sysread($f, $buffer, 1, 0);
            close($f);
            $first = ord($buffer);
            return $first;
        }

        sub GetFirstFromMap {
        }

        sub GetFileName {
            my ($URL) = @_;
            my @x = split(/\//, $URL);
            my $fileName = pop @x;
            return $fileName;
        }

        sub GetChunk {
            my ($URL, $file, $offset, $readLen) = @_;
            my $end = $offset + $_chunkSize - 1;
            my $curlCmd = "$_curl -x $_proxy -u $_proxy_auth -r $offset-$end -# \"$URL\"";
            print "$curlCmd\n";
            my $buff = `$curlCmd`;
            ${$readLen} = syswrite($file, $buff, length($buff));
        }

        sub GetFile {
            my ($URL, $first, $outFile, $fileSize) = @_;
            my $readLen = 0;
            my $start = $fileSize + 1;
            my $file;
            open($file, "+>>$outFile") or die "Couldn't open $outFile to write";
            if ($fileSize <= 0) {
                my $uc = pack("C", $first);
                syswrite ($file, $uc, 1);
            }
            do {
                GetChunk($URL, $file, $start, \$readLen);
                $start = $start + $_chunkSize;
                $fileSize += $readLen;
            } while ($readLen == $_chunkSize);
            printf("Downloaded %s(%d bytes).\n", $outFile, $fileSize);
            close($file);
        }

        sub PrintBanner {
            printf ("pget version %s\n", $_version);
            printf ("There is absolutely NO WARRANTY for pget.\n");
            printf ("Use at your own risk. You have been warned.\n\n");
        }

        sub PrimaryValidations {
            unless( -e "$_curl") {
                printf("ERROR:curl is not at %s. Pls install or provide correct path.\n", $_curl);
                exit;
            }
            unless( -e "$_extFile") {
                printf("extFile is not at %s. Creating one\n", $_extFile);
                `touch $_extFile`;
            }
            if ( $_chunkSize <= 0) {
                printf ("Invalid chunk size. Using 1Mb as default.\n");
                $_chunkSize = 1024*1024;
            }
        }
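
    Since the script works by fetching the file in many small HTTP/1.1 Range requests (curl -r) so that no single reply exceeds the size limit, one hedged countermeasure is to deny Range headers at the proxy, which makes each chunked fetch fall back to a full-size (and therefore blocked) download. Directive names vary by Squid version -- header_access in 2.6, request_header_access in 3.x -- so treat this as a sketch:

        # squid.conf (squid 2.6 syntax)
        header_access Range deny all
        header_access Request-Range deny all

    The trade-off is that this also breaks legitimate resumed downloads and byte-range clients (some update agents and PDF readers rely on Range), so it may be better scoped to specific ACLs than applied to all.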

  • Run FTP session from bash script

    - by Adam Salkin
    I'm trying to write a bash script to test whether an FTP site that I own is running. I want the script to connect to the FTP site, log in with a dummy account, and redirect the output to a file that I can then grep to confirm that the login succeeded. (I know that putting user/pass in a file is not recommended, but this dummy account is chrooted to one empty directory and can't escape to the shell, and in any case I'm the only user who can log in to a shell prompt.) I'm using bash on Ubuntu. I created a file called "ftp-dummy" which looks like this:

        username
        password

    And I then did this from the prompt:

        adam$ ftp my.ftpsite.com < ftp-dummy

    This does not work -- I don't see the normal welcome message, and the output is:

        Password:Name (my.ftpsite.com:adam) :

    I tried removing the space between the < and the filename -- same result. If I redirect the output to a testfile, the testfile shows:

        Name (my.ftpsite.com:adam): ?Invalid command

    And I still get a Password prompt on stdout. I also tried using echo and got the same result:

        echo -e "username \npassword \n" | ftp my.ftpsite.com

    I don't see why I'm not seeing the normal welcome message or why the input is not being read from the file. Any help would be much appreciated. Thanks, Adam
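
    What appears to be happening is that ftp's auto-login reads the Name/Password prompts from the terminal, not from the redirected stdin, so the file's first line ("username") arrives later as a command -- hence "?Invalid command". The usual fix is -n to suppress auto-login, then an explicit user command. A sketch of the whole health check (host and credentials are placeholders):

        #!/bin/bash
        # -n: no auto-login; -v: force verbose replies even when stdin is not a tty.
        # The here-doc feeds the commands on stdin.
        ftp -nv my.ftpsite.com <<'EOF' > /tmp/ftp-test.log 2>&1
        user username password
        quit
        EOF
        # 230 is the FTP "user logged in" reply code
        grep -q '^230' /tmp/ftp-test.log && echo "FTP login OK" || echo "FTP login FAILED"

    A ~/.netrc entry for the host is the other conventional route, and keeps the credentials out of the script itself.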

  • Ubuntu 11.10 Virtual-box Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D (GNOME 3) to work on my VirtualBox guest -- I have been poring through Internet and forum posts but to no avail. Here's what I've done so far:

    - VirtualBox 4.1.4r74921 on Windows 7
    - Installed Ubuntu Desktop 11.10 (32-bit)
    - Enabled 3D acceleration
    - Allocated 1.5 GB of RAM
    - Allocated 50 MB of video memory (hope this is not the culprit)
    - Installed Guest Additions 4.1.4
    - Ran apt-get update and apt-get upgrade
    - Booted back into Ubuntu -- it falls back to Unity 2D

    Shared folders and mouse integration all work, so the Guest Additions are properly installed. I tried the support test, and below is the output:

        $ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   Mesa Project
        OpenGL renderer string: Software Rasterizer
        OpenGL version string:  2.1 Mesa 7.11

        Not software rendered:    no
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  no
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes

        Unity 3D supported:       no

    I am trying to find out what the "no" means but cannot find any good answers. The host is an Intel Core i5 processor with 4 GB of RAM and an NVIDIA GeForce 8400GS display adapter. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?
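
    The line that matters is "OpenGL renderer string: Software Rasterizer" -- the guest is not using VirtualBox's accelerated GL driver at all, so unity_support_test correctly reports "Not software rendered: no" and vetoes Unity 3D. A couple of hedged checks inside the guest:

        # is the VirtualBox video stack actually loaded?
        lsmod | grep -i vbox
        # with working 3D pass-through the renderer usually mentions Chromium/VirtualBox
        glxinfo | grep -i renderer

    If it still reports the software rasterizer, reinstalling the Guest Additions with the dkms package present, and raising video memory well above 50 MB (128 MB is the commonly suggested floor for Unity), are the usual next steps.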

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box runs Red Hat Enterprise 5; our backup box runs Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3 GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" -- which looks at return codes -- never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK -- resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
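
    Independent of the root cause, the silent client side is fixable: ftp's exit status doesn't reflect transfer failures, but the remote file size can be compared against the local one after the upload. A hedged sketch (host, credentials and paths are placeholders; the field position in the DIR output depends on the server's listing style -- IIS defaults to a DOS-style listing with the size in the 3rd column -- so adjust the awk accordingly):

        #!/bin/bash
        backup="backup_$(date +%Y%m%d).zip"
        local_size=$(stat -c%s "/backups/$backup")
        remote_size=$(ftp -nv backup-box <<EOF | awk -v f="$backup" '$0 ~ f {print $3}'
        user backup secret
        dir $backup
        quit
        EOF
        )
        [ "$local_size" = "$remote_size" ] || { echo "upload of $backup incomplete" >&2; exit 1; }

    That turns the intermittent 426 into a hard, alertable failure on the client instead of a silently truncated backup.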

  • ssh keys rejected each day

    - by EddyR
    I've had OpenSSH server running on my Debian server for a couple of weeks, and all of a sudden, when I go to log in the next day, it rejects my SSH key and I have to manually add a new one each time. Not only that, but I have the "tunnelling with clear-text passwords" option enabled, and the non-root account for that (login as root is disabled) is rejected too. I'm at a loss as to why this is happening, and I can't find any SSH options that would explain it.

    UPDATE: I just changed the debug level to DEBUG. Before that, I was seeing a lot of the following in auth.log:

        Feb  1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0)
        Feb  1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root
        ...
        Feb  1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT!
        ...
        Feb  1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx
        ...

    My sshd_config settings are:

        # Package generated configuration file
        # See the sshd(8) manpage for details

        # What ports, IPs and protocols we listen for
        Port xxx
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes

        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768

        # Logging
        SyslogFacility AUTH
        LogLevel DEBUG

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes

        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys

        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes

        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no

        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no

        # Change to no to disable tunnelled clear text passwords
        PasswordAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no

        #MaxStartups 10:30:60
        #Banner /etc/issue.net

        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*

        Subsystem sftp /usr/lib/openssh/sftp-server

        UsePAM no
        ClientAliveInterval 60
        AllowUsers myuser
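
    With StrictModes yes (as above), sshd silently ignores authorized_keys whenever the home directory, ~/.ssh, or the key file itself is group- or world-writable -- and if a nightly job (backup restore, permission "fixer", quota script) loosens those modes, key logins would start failing on a daily schedule exactly like this. A hedged check to run right after a failure, before re-adding the key:

        ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
        # tighten, then retry the login
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    Correlating the failure time with the cron entries visible in auth.log should show whether a scheduled task is the trigger.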

  • Why do I get this message from chrome when navigating to https://www.amazon.com?

    - by Denis
        This is probably not the site you are looking for!
        You attempted to reach www.amazon.com, but instead you actually reached a server identifying
        itself as *.voxcdn.com. This may be caused by a misconfiguration on the server or by something
        more serious. An attacker on your network could be trying to get you to visit a fake (and
        potentially harmful) version of www.amazon.com.

    Intermittently, I get a blank page when going to http://www.amazon.com. So I stuck an 's' in the URL, making it https://www.amazon.com, and got the message above (with the nice red screen) from Chrome, indicating there might be some monkey business going on. After hammering on the URL a bunch of times and pulling it up in Chrome's developer tools to look at the network traffic, the URL (without the s) started behaving. The URL with the s just hangs, but the red screen no longer comes up.

    Some specs: I've got a MacBook Pro, Snow Leopard, Time Warner cable. I've had enough strange stuff happening over the past couple of months (google.com, youtube.com, amazon.com not coming up, or loading strange error messages with random reference numbers) that I finally decided to switch to OpenDNS. Still having problems, though.
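
    A certificate for *.voxcdn.com showing up on www.amazon.com means the name is resolving to a CDN node that isn't serving Amazon -- consistent with a resolver or cache problem rather than anything on Amazon's side. A quick hedged way to compare answers from the configured resolver and a known one (208.67.222.222 is one of OpenDNS's public addresses):

        dig www.amazon.com +short
        dig @208.67.222.222 www.amazon.com +short

    If the two disagree, flushing the local cache (dscacheutil -flushcache on Snow Leopard) and confirming which DNS servers the Mac and the router actually hand out would be the next step.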
