Search Results

Search found 26004 results on 1041 pages for 'debian based'.

Page 810/1041 | < Previous Page | 806 807 808 809 810 811 812 813 814 815 816 817  | Next Page >

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N. Firmware: FTOS 8.4.2.6.
    Problem: We're trying to PXE boot some servers that are connected via port-channel interfaces with LACP.
    Current work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to turn up the port-channel interfaces after the server is built.
    Details: Is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or a larger chassis-based Force10? I'm wondering if Native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port channel is in. I will confirm this next (I think it's the only thing I haven't tried).
    Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/
    Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus.
    Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning-tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone who has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it will only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation-type features are only being supported on the newer switches.

    Read the article

  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes. At least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if the local IP address is known, as well as the port. I haven't actually tried port scanning a specific IP using nmap or another tool yet. So I've tried one solution to remove the contribution of the router entirely (to verify that it works without the router's influence). I set up an access point using my iPhone and tethered my computer (with the library) to it. From here, I was able to pair my library and the iPhone Remote application. Control of the library was normal as well. This solution is not ideal, however, because I am actively using bandwidth with my computer and cannot afford to be tethered to my 3G connection. A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server is set up with pptpd as the VPN server; I'm using iptables to perform IP masquerading and forward traffic. However, I still cannot connect the library to the phone. I can, however, see both devices on the VPN subnet (192.168.0.0/24). SSH'ing and such works fine. What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to various PPTP clients based on MAC addresses?
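    A minimal sketch of the two server-side pieces mentioned above, assuming a stock pptpd setup; the interface name, secrets and addresses are placeholders. Note that pptpd hands out addresses per PPP username rather than per MAC address (the client's MAC never crosses the tunnel), so static assignment is done in /etc/ppp/chap-secrets:

        # /etc/ppp/chap-secrets -- one static IP per PPTP user
        # client    server   secret      IP addresses
        laptop      pptpd    "s3cret1"   192.168.0.10
        iphone      pptpd    "s3cret2"   192.168.0.11
        # ("pptpd" must match the name directive in /etc/ppp/pptpd-options)

        # masquerading/forwarding on the VPN server (eth0 = public interface)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward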

    Read the article

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single dedicated virtual server, it reads the primary server IP and domain and reports that when sending emails from the system. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject our email because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address. This is not modifiable. Every domain we own does, of course, have its own IP as well underneath our primary server IP. I have seen several places online that provide a patch, specifically one which allows domain binding: "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" Qmail Link. First off, I do not have netqmail installed so I'll need to find another source, but also I am completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be restored after the patch? I am very, very new to unix/linux so this does make me a bit nervous, but I am the only person who can make the change and it is one our company "HAS" to have. Any ideas?
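    To illustrate the quoted description only: the patch's control file is said to work like smtproutes, i.e. one domain-to-value pair per line, keyed on the sender's domain. The filename and exact syntax below are assumptions (check the patch's README), but the shape would be something like:

        # /var/qmail/control/<name per the patch's README> -- hypothetical example
        example.com:203.0.113.10
        otherdomain.com:203.0.113.11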

    Read the article

  • linked-server sql - access

    - by user22121
    Hi, I have a SQL Server 2000 instance and an Access database (.mdb) connected via a linked server. On the other hand, I have a C# program that updates data in a SQL table (Users) based on the Access database. When running, my program returns the following error message: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. Authentication failed. [OLE/DB provider returned message: Cannot start the application. The workgroup information file is missing or is opened exclusively by another user.] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80040E4D: Authentication failed.]. The program, the SQL Server and the Access database are all on a remote server. On the local server the problem was solved by running the following: "sp_addlinkedsrvlogin 'ActSC', 'false', NULL, 'admin', NULL". On the remote server I tried the following, without result: "sp_addlinkedsrvlogin 'ActSC', true, null, 'user', 'pass'". On the remote server, and from Query Analyzer, SQL update statements work correctly. Can you think of what the problem may be? Thanks!
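    A sketch of the login mapping that usually works for the Jet provider, assuming the linked server really is named ActSC as in the question. The Jet side authenticates with the Access workgroup account (Admin with a blank password unless a custom .mdw is in use), and the SQL Server service account itself also needs filesystem access to the .mdb/.ldb and to the workgroup (.mdw) file the error is complaining about:

        -- map all local logins to the default Access 'Admin' account
        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = 'ActSC',
             @useself     = 'false',
             @locallogin  = NULL,
             @rmtuser     = 'Admin',
             @rmtpassword = NULL;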

    Read the article

  • Laptop choice for development: MacBook Pro 17 vs Dell Studio XPS 16 vs HP Envy 15

    - by Shalan
    Hey! First things first - let me state that I am not intending to play games on this - I have narrowed it down to these 3 purely based on specs and their individual brand reliability in the market. I intend to primarily use: Visual Studio 2008 Pro a lot (develop and deploy on Windows platforms), SQL Server 2005, Oracle 10g, Adobe Photoshop CS4, Microsoft Expression Studio, and Google Sketchup. I currently use a desktop PC (Core2Duo 2.66GHz with 3GB DDR2 memory) running Vista Business 32-bit, and I have to admit that, especially for Visual Studio, it's quite sluggish to a point where it affects productivity. Furthermore, I intend to only use the notebook on a table - with a cooled surface, like granite :) - so I would appreciate people's input with regard to heat issues. I'm aware that the Dell's primary exhaust gets blocked by the lid when open, but some reviews don't seem to place extraordinary emphasis on heat issues resulting from this.
    My options for the Dell/Alienware: Core i7 720QM, 4GB DDR3 memory, ATI Mobility 3670 (512MB), 128GB solid state drive, 16-inch Full HD RGB-LED LCD display (1080p), 3-year next-business-day support.
    My configuration for the Apple MBP: Core2Duo 2.8GHz (I'm assuming the T9600), 4GB DDR3 memory, 128GB solid state drive, standard 1-year support.
    The one advantage I can think of with the MBP is the addition of OS X (though I'm unsure what I would use it for, other than purely to play around with a much-boasted-about OS). What are your thoughts on this, especially regarding build quality, heat, performance and battery life? Much thanks! ~shalan

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone. I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances that I normally perform backups to over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. A second job backs up a pool of 5 VMs to the second StoreVault appliance and takes 2 hours. We then installed OpenFiler on an old server that has two Xeon processor cores, with software RAID 5 in place. When performing the same backups to an NFS OpenFiler share, the first backup job, which normally takes 13 minutes, takes around 4 hours. The second backup job, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of software RAID 5 explained the long backup times. I then installed OpenFiler on a second server, an IBM x306 machine which has an Intel P4 processor. This time no software RAID, or any RAID at all: a single 750GB hard drive that contains the OS, with the rest of the disk used to back up VMs to an NFS share. I performed the first backup job of the pool of 3 VMs. This time the backup job took an hour and a half to complete instead of 13 minutes! Is OpenFiler simply poor at being an NFS server? Has anyone else had these issues with OpenFiler?
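    One thing worth ruling out on the OpenFiler side is synchronous-write behaviour on the export: ESX issues sync NFS writes, and a single SATA disk or a software RAID 5 set handles them poorly. A sketch of the kind of export line to compare against, assuming the share lives under /mnt/backup and the ESX host is 192.168.1.10 (both placeholders); async trades safety for speed, so treat it as a diagnostic step rather than a fix:

        # /etc/exports -- diagnostic only: async acknowledges writes before they hit disk
        /mnt/backup 192.168.1.10(rw,async,no_root_squash,no_subtree_check)
        # after editing:
        exportfs -ra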

    Read the article

  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed just to check what the result would be. It was scary, very scary! What I've come up with after reading the above link is that Google has a kind of extensive database of WiFi locations. That would be understandable in the case of public and open WiFi networks that are used by a lot of people. Some of them might be using applications that could gather location data, and somehow this information ends up in giant Google databases. From those, Google could pinpoint a WiFi location based on its MAC address along with these bits of info that have been gathered via various sources. The issue here is that my WiFi is private - I don't even broadcast my WiFi name. So how on earth did Google find my exact PC location? Please break down the answer in layman's terms as much as possible.

    Read the article

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows:
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.
    My date is as follows:
        [root@machine cron.daily]# date
        Thu Aug 23 06:25:21 GMT 2012
    Now, based on details in various forums, I tried to fix this by setting /etc/timezone to "+0800", but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone.
    EDIT: Sadly, neither of the changes is working:
        [root@machine cron.daily]# cat /etc/TIMEZONE
        UTC
    Quanta's suggestion:
        [root@machine cron.daily]# cat ~/.bash_profile
        # .bash_profile
        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export TZ=GMT
        export PATH
        [root@machine cron.daily]# source ~/.bash_profile
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.
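    One thing worth noting: /etc/cron.daily/0logwatch is run by cron, and cron does not source ~/.bash_profile, so the TZ export above never reaches the script. A sketch of two ways to get TZ to it (the GMT value matches what the question wants):

        # test interactively with the variable set just for this run:
        TZ=GMT /etc/cron.daily/0logwatch

        # or make it stick for all cron jobs -- near the top of /etc/crontab
        # (or in a file under /etc/cron.d):
        TZ=GMT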

    Read the article

  • How to render remote assistance to a person using Live Messenger?

    - by Cheeso
    There is a feature within Windows Live Messenger v9 that allows a person to ask for remote assistance. But as I understand it, this works only if the router is UPnP-enabled on both ends. Today I tried this with a friend during an active chat session, and nothing happened. I suspect a router problem; as I am remote, I cannot configure the router for them. What's a good way to render remote assistance? Here's the scenario: it will be based on invitation only (it's not a remote desktop or "logmein" situation). It's a younger person, a computer novice, on the other end of the wire. I'll be assisting with their use of applications on the PC. I'd like to be able to SEE the screen, and also use the mouse and keyboard. I have used UltraVNC on the target machine and vncviewer on my machine, on a LAN. It works well. But I don't think I can use that, because it's my kids' computer in my ex-wife's place, and I don't want her to accuse me of spying on her computer. That's why I need it to be invitation only. Advice please. Is there an easy way for me to set up Remote Assistance? Is there some other tool I can use?

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
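    One common pattern, sketched below under the assumption of MySQL-style replication (the question doesn't name the engine) and a hypothetical get_master()/get_slave() connection factory: after the write, remember the master's binlog coordinates, then ask the slave to wait briefly until it has replayed past them, falling back to the master if it hasn't caught up in time.

        def read_after_write(write_sql, write_params, read_sql, read_params):
            """Hypothetical read-your-writes helper for master/slave MySQL.
            Assumes a DB-API driver with %s placeholders (MySQLdb/pymysql style)."""
            master = get_master()                       # assumed connection factory
            cur = master.cursor()
            cur.execute(write_sql, write_params)
            master.commit()
            cur.execute("SHOW MASTER STATUS")           # current binlog file + position
            binlog_file, binlog_pos = cur.fetchone()[:2]

            slave = get_slave()
            scur = slave.cursor()
            # Wait up to 1 second for the slave to apply events past (file, pos).
            # MASTER_POS_WAIT returns NULL or -1 if it cannot; fall back to the master.
            scur.execute("SELECT MASTER_POS_WAIT(%s, %s, 1)",
                         (binlog_file, binlog_pos))
            caught_up = scur.fetchone()[0]
            if caught_up is not None and caught_up >= 0:
                scur.execute(read_sql, read_params)
                return scur.fetchall()

            cur.execute(read_sql, read_params)          # slave is stale: use the master
            return cur.fetchall()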

    Read the article

  • Permission denied when running Rails app in VirtualBox Ubuntu guest with files on Windows host

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat. So I've got a folder "Rails" on my NTFS drive, which I've shared (Samba) from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents. But when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file) - why is that? Are there any major conceptual flaws with this approach? Would I be better off using the VBox "shared folders" function?
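    A sketch of the kind of fstab line that usually behaves better here, assuming the Ubuntu user's uid/gid is 1000 (check with the id command); the share path, mount point and credentials file are placeholders. Without uid=/gid= everything on a cifs mount is owned by root, which trips applications that check file ownership even when the mode bits look permissive, and file_mode/dir_mode are the usual cifs spellings for the permission bits:

        # /etc/fstab
        //192.168.56.1/Rails  /home/me/Rails  cifs  credentials=/home/me/.smbcredentials,uid=1000,gid=1000,file_mode=0775,dir_mode=0775,iocharset=utf8  0  0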

    Read the article

  • AWS lighttpd: Sending a copy of requests to test.

    - by Martin
    I have a load-balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server is running lighttpd, which does logging and forwards the requests to my service (on the same machine). I have written a new version of the service. It is installed and running on an EC2 machine test1 (basically a mirror of our current server, but with the new service running instead of the original) and I have done some preliminary tests that look good. But what I would like to do is mirror a fraction of incoming traffic to the new version of the service so I can do some comparisons between the original version and the new version based on real traffic. Thus I was thinking I could modify one box behind the ELB to duplicate its traffic to test1. I was thinking I could modify the configuration of lighttpd so that each request is mirrored/duplicated (i.e. the original service keeps responding as before, but a copy of the request is sent to test1 and the reply from test1 is just dropped). Unfortunately I have not been able to work this out. Any ideas on how I could mirror the requests from one box to itself and test1? Or any other ideas for testing?
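    lighttpd itself may not offer in-line request mirroring, so as a rough stand-in here is a sketch (Python 2) that replays a sampled fraction of GET lines from the lighttpd access log against test1 and discards the responses. The hostnames, log path and log format are assumptions; it only works for idempotent GETs, not POSTs:

        import random, re, time, urllib2

        LOG = "/var/log/lighttpd/access.log"   # assumed log path
        TEST_HOST = "http://test1.internal"    # assumed hostname of the test instance
        SAMPLE = 0.10                          # replay roughly 10% of requests

        # Common Log Format: ... "GET /path HTTP/1.1" ...
        req_re = re.compile(r'"GET (\S+) HTTP/')

        def follow(path):
            """Yield new lines appended to the log, tail -f style."""
            with open(path) as f:
                f.seek(0, 2)                   # start at the end of the file
                while True:
                    line = f.readline()
                    if not line:
                        time.sleep(0.5)
                        continue
                    yield line

        for line in follow(LOG):
            m = req_re.search(line)
            if not m or random.random() > SAMPLE:
                continue
            try:
                urllib2.urlopen(TEST_HOST + m.group(1), timeout=5).read()
            except Exception:
                pass                           # drop the mirrored reply and any errors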

    Read the article

  • remote telnet and email

    - by Mustafa Ismail Mustafa
    This issue has been occupying my work for the last few days, and I will be understating when I say it's driven me up the blasted walls. Essentially, I can ping and tracert the domain jnrcs.org and the subdomains mail.jnrcs.org and mail.jordanredcrescent.org. All three mentioned point to IP address 212.38.147.97. About 4 days ago, when we registered the domain "jnrcs.org", suddenly all external connections to the mail server from outside were lost. Not just mail, but other HTTP-based, port-forwarded or NATted services (such as camera surveillance and PBX services). I tried good old telnet (I'm a Linux user) and I get the following output:
        telnet> o mail.jnrcs.org 25
        Trying 212.38.147.97...
        telnet: Unable to connect to remote host: No route to host
        telnet>
    Tracert gives me:
        traceroute to mail.jnrcs.org (212.38.147.97), 30 hops max, 60 byte packets
         1  192.168.1.2 (192.168.1.2)  0.869 ms  0.944 ms  *
         2  * * *
         3  * * *
         4  * * *
         5  * * *
         6  *  212.38.128.118 (212.38.128.118)  33.875 ms  39.187 ms
         7  * * *
         8  * * *
         9  * * *
        10  * * *
        11  * *  212.38.147.97 (212.38.147.97)  67.621 ms
    I am stumped. Other friends from all around the world can telnet with no problem. What could have possibly happened to make telnet/SMTP/POP/IMAP/HTTP access stop? Please bear in mind I'm primarily a developer, but I [am under the delusion] that I can carry my weight in IT administration :) TIA

    Read the article

  • JAVASCRIPT ENABLED

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on: "Your Javascript is disabled. Limited functionality is available." It will stay for maybe a day, sometimes two. I have uninstalled Javascript and reinstalled, but still the same. I am using Chrome. Any help would be gratefully received, many thanks. Dominic
    P.S. My system spec is as follows:
        System Information
        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 MHz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use for our company. All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the Event log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working ok -- that's saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare), and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, with the logs sent to SQL Server (perhaps with full-text search enabled), work? I'd hope the cost would be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.) Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.
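    For the collection half of the split model, pointing the Linux boxes at a central Kiwi (or any other) syslog server is a one-line change per host. A sketch, assuming rsyslog and a placeholder hostname (@@ forwards over TCP, a single @ over UDP):

        # /etc/rsyslog.d/forward.conf
        *.*  @@logserver.example.local:514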

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and low-level versions, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html. When I uploaded files of size less than 4 GB, the upload processes completed without any problem. When I uploaded a file of size 13 GB, the code started to throw IO exceptions (broken pipe). After retries, it still failed. Here is the way to repeat the scenario:
    1. Take the 1.1.7.1 release and create a new bucket in the US Standard region.
    2. Create a large EC2 instance as the client to upload the file.
    3. Create a file of 13 GB in size on the EC2 instance.
    4. Run the sample code from either the high-level or low-level API S3 documentation page on the EC2 instance.
    5. Test any one of the three part sizes: the default part size (5 MB), or a part size of 100,000,000 or 200,000,000 bytes.
    So far the problem shows up consistently. I attached a tcpdump file here for you to compare. In there, the host on the S3 side kept resetting the socket.
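    For comparison only, here is a sketch in Python with boto (a different SDK from the Java samples in the question) that retries each part independently, so a single broken pipe does not kill the whole 13 GB transfer; bucket name, key and file path are placeholders, and credentials are assumed to come from the environment or boto config:

        import math, os, time
        from cStringIO import StringIO
        import boto

        PART_SIZE = 100 * 1000 * 1000          # ~100 MB parts, as in the test matrix
        conn = boto.connect_s3()
        bucket = conn.get_bucket("my-test-bucket")
        mp = bucket.initiate_multipart_upload("big-file.bin")

        with open("/data/big-file.bin", "rb") as f:
            size = os.fstat(f.fileno()).st_size
            for i in range(int(math.ceil(size / float(PART_SIZE)))):
                f.seek(i * PART_SIZE)
                data = f.read(PART_SIZE)
                for attempt in range(5):       # retry this part only
                    try:
                        mp.upload_part_from_file(StringIO(data), part_num=i + 1)
                        break
                    except Exception:
                        time.sleep(2 ** attempt)
                else:
                    mp.cancel_upload()
                    raise RuntimeError("part %d failed after retries" % (i + 1))
        mp.complete_upload()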

    Read the article

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book and task functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some using MS Outlook and some others using Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos ask for installation of MySQL/PgSQL, LDAP, Samba, etc. While I can manage the installation of Samba (if required), I have no idea if installing LDAP, MySQL, etc. is really required. Also, any guidance as to how to install this on a regular mail server would be appreciated. Sorry if this sounds vague; if any more information is required, I'll be happy to give it. Thanks in advance. Edit: The server in question has always been managed via cPanel (to install PHP, MySQL, configure DNS, etc.), so I am confused as to whether I really need LDAP.

    Read the article

  • Sharing large (multi-Gb) files with clients

    - by Tim Long
    I wasn't sure if this was the best place for this question, but I think it is squarely in the realm of the IT admin, so that's the reason I put it here. We need to share large files (several gigabytes) with external clients. We need a simple way of reliably and automatically publishing these files so that clients can then download them. Our organization has Windows desktops and a Windows SBS 2011 server. Sharing from our server is probably suboptimal from the client's perspective, because of the low upstream bandwidth of typical ADSL (around 1 Mbps) - it would take all day (9 hours for a 4 GB file) for the client to download the file. Uploading to a 3rd-party server is good for the client but painful for us, because we then have to deal with a multi-hour upload. Uploading to a third-party server would be less problematic if it could be made reliable and automatic, e.g. something like a Groove/SharePoint Workspace - simply drop the file in and wait for it to synchronize - but Groove has a 2 GB limit, which is not big enough. So ideally I'd like a service with the following attributes: it must work for files of at least 5 GB, preferably 10 GB; once the transfer is started, it must be reliable (i.e. not sensitive to disconnections and service outages) and completely automatic; ideally, the sender would get a notification when the transfer completes; and it has to work with Windows-based systems. Any suggestions?

    Read the article

  • INFORMIX - listener thread err 25582

    - by Samuel Lao
    I've been digging through different forums for the last 7 days looking for a possible solution. Our database is Informix, running on a Linux server (SUSE Linux 11). Suddenly, last Saturday, Informix began to show an error message: listener-thread err=-25582 oserr=0, network connection is broken. End users started to call, reporting slow network performance to this server and moments where the database application lost its connection with the server, so we proceeded to ping the db server, getting good responses (1 ms) without losing packets. I tried typing telnet (ipserver) 1526, which is Informix's port for the application, and it works. We had to disconnect the server and enable a backup db server which is located at another branch. It has been working only so-so because the backup server doesn't have good specs (it is an old Dell server model). So, I scanned the main server looking for viruses using Trend Micro ServerProtect; it didn't find anything (0 viruses and spyware). I checked the switches and routers, but I haven't found anything strange. What else could it be? Thanks in advance for your time and help with this issue. I would really appreciate any advice.

    Read the article

  • Apache2, making my site publicly available

    - by Shackler
    Hello, I want to make my Apache 2 development server public to the internet; it is a Django-based website. Here is my apache2 config:
        <VirtualHost *:80>
            Alias /media /home/user/myproject/statics
            Alias /admin_media /home/myuser/django/Django-1.1.1/django/contrib/admin/media
            WSGIScriptAlias / /home/myuser/myproject/myproject_wsgi.py
            WSGIDaemonProcess myproject user=myuser group=myuser threads=25
            WSGIProcessGroup myproject
        </VirtualHost>
    When I do netstat -lntup I get:
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      -
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
        tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      -
        tcp6       0      0 :::80                   :::*                    LISTEN      -
        tcp6       0      0 :::22                   :::*                    LISTEN      -
        tcp6       0      0 ::1:631                 :::*                    LISTEN      -
        udp        0      0 0.0.0.0:5353            0.0.0.0:*                           -
        udp        0      0 0.0.0.0:38582           0.0.0.0:*                           -
    I connect via ADSL, so I am behind a router. Because of this I have put my machine in the router's DMZ. What can the problem be? When I try to log in with my IP, I get my router's config page; when a friend tries to connect to me from the internet, he gets "not authorized".
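    The "not authorized" page from outside suggests the router's own web admin interface is likely answering on port 80 of the WAN address before the DMZ rule comes into play (many routers do this when remote management is enabled), so that is worth checking first. On the Apache side, a couple of quick sanity checks and a hedged tweak to the vhost shown above; the hostname and LAN IP are placeholders:

        # does Apache answer locally and on the LAN address?
        curl -I http://127.0.0.1/
        curl -I http://192.168.1.5/       # replace with this machine's LAN IP

        # an explicit ServerName in the vhost avoids name-based matching surprises
        <VirtualHost *:80>
            ServerName myproject.example.com
            ...
        </VirtualHost>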

    Read the article

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse proxy gateway to our internal network. Our setup is that we have one external IP but three domains that we would like to point to various webservers on our internal network. It's not so much a load balancing or caching issue, etc. - merely routing some client browsers to a port 80 webpage (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine. Static pages load, etc. Everything is good with the exception of a Flash/Flex-based web client for a Digital Asset Management program. The actual static page loads fine; it is just that at the moment of entering credentials, be they correct or incorrect, and hitting login, there is no response whatsoever - neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? The documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions: Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded? Is there another reverse proxy program (nginx?) that allows this type of config? Am I looking at this entirely the wrong way - should Flash/Flex fundamentally not be allowed to have this access?
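    For reference, the crossdomain.xml idea refers to the policy file the Flash runtime fetches from the root of whatever host the Flex client sends its data requests to; if the login call goes to a different host or port than the page was served from, the player will refuse it unless that host publishes a policy allowing it. A deliberately permissive sketch (tighten the domain list before production), served as /crossdomain.xml on the backend the client talks to:

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
          "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
          <site-control permitted-cross-domain-policies="master-only"/>
          <allow-access-from domain="*" secure="false"/>
        </cross-domain-policy>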

    Read the article

  • Problems with OpenVPN setup

    - by user70617
    Hi, I'm trying to set up a VPN server using OpenVPN and I'm getting some errors while trying to connect the client to the server. I'm getting the following error:
        Sun Feb 13 14:54:16 2011 OpenVPN 2.1.4 i686-pc-linux-gnu [SSL] [LZO2] [EPOLL] built on Feb 5 2011
        Sun Feb 13 14:54:16 2011 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Sun Feb 13 14:54:16 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Sun Feb 13 14:54:16 2011 ******* WARNING *******: all encryption and authentication features disabled -- all data will be tunnelled as cleartext
        Sun Feb 13 14:54:16 2011 RESOLVE: NOTE: localhost resolves to 2 addresses
        Sun Feb 13 14:54:16 2011 Note: Cannot ioctl TUNSETIFF tap0: Device or resource busy (errno=16)
        Sun Feb 13 14:54:16 2011 Note: Attempting fallback to kernel 2.2 TUN/TAP interface
        Sun Feb 13 14:54:16 2011 Cannot open TUN/TAP dev /dev/tap0: No such file or directory (errno=2)
        Sun Feb 13 14:54:16 2011 Exiting
    I have bridge-utils installed and tap0 shows up in ifconfig. Can anybody give me a hand? Thanks in advance.
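    For what it's worth, the interesting line is the TUNSETIFF "Device or resource busy" one: tap0 already exists (which matches it showing up in ifconfig) and something else appears to be holding it open, so the later /dev/tap0 message is just the old kernel-2.2 fallback path failing afterwards. A sketch of the usual bridged-tap preparation, assuming eth0 is the LAN-side interface and nothing else (another OpenVPN instance, for example) is still attached to tap0; the LAN address is a placeholder:

        # make sure the tun module and device node are there
        modprobe tun
        ls -l /dev/net/tun

        # create a persistent tap0, bridge it with the LAN interface, bring it up
        openvpn --mktun --dev tap0
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 tap0
        ifconfig tap0 0.0.0.0 promisc up
        ifconfig br0 192.168.1.5 netmask 255.255.255.0 up

        # then point the server config at the existing device:  dev tap0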

    Read the article

  • how to cause linux system datetime to run faster than real world datetime?

    - by JamesThomasMoon1979
    Background: I want to monitor a running Linux system over several days. It's a custom Gentoo build with a lot of custom software on board. This software has ongoing maintenance timers, cron scripts and other clock-driven events. I need to verify these scheduled events are working.
    Problem: Waiting for the system to step through daily and weekly activity is a long wait. And modifying all clock-based timers on the system would be time-consuming. Yet I often want to test a system's end-to-end scheduled activities without waiting a week.
    Potential solution: Have the Linux system under test appear to run through its daily cycle of activity within just a few hours.
    My question for Serverfault: Is there a way to cause the system's time to run faster than real-world time? My first thought is manipulating the ntp daemon to repeatedly and smoothly increment the clock. Any other ideas? And yes, I know this may have strange side effects. However, the system has no important or time-critical interactions with systems outside of itself. And this may be a valuable testing technique.
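    A crude sketch along the same lines as the ntpd idea, with the usual caveat that any running time-sync daemon has to be stopped first or it will undo the changes; the step size and interval below are arbitrary (roughly 13x real time) and the init script path may differ on Gentoo:

        /etc/init.d/ntpd stop                      # or however time sync is managed on the box
        while true; do
            date -s "$(date -d '+10 minutes')"     # jump the clock forward 10 minutes
            sleep 50                               # every 50 real seconds
        done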

    Read the article

  • Maximum RAM on Biostar P4M8PM7 Socket775 mATX board

    - by Alex Balashov
    I have a server with a Biostar P4M8PM7 ("Pro-M7") board based on a VIA chipset. It's a strange board to put in a server because it seems like more of a desktop board to me, but alas! It takes DDR2-667 (PC2-5300) RAM. What I can't figure out is the maximum amount I can put in it, as I cannot find the manual anywhere online. I've found a few marketing broadsheets from online retailers that say "up to 2 GB of RAM!", but I am not sure whether to believe them. They also do not seem to be for quite the same board, as they indicate DDR2 400/533 RAM, for example: http://www.geeks.com/details.asp?invtid=P4M8P-M7. The manufacturer's web site says the same thing, but does not elaborate. It's a 64-bit CPU and board; is there a technical reason why the board would not be able to address more than 2 GB? Can someone tell me what sort of reason that would be? I bought this server from someone really hoping I could put 8 to 16 GB in it, and wanted to do some research before I gave up. On a related note, it is not indicated anywhere whether it can take ECC RAM; the existing chips are not ECC, but most memory sold in the range I'm looking for (e.g. DIMMs with enough chip density to do 8 GB) seems to be server-class and for that reason ECC. Any ideas? Thank you very much for your consideration in advance!
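    If the box is already running Linux, the BIOS's own claim about the memory ceiling can be read out of the DMI tables; it is not always accurate, but it is another data point alongside the retailer sheets. A quick sketch (run as root):

        dmidecode -t 16    # "Physical Memory Array": Maximum Capacity and Number Of Devices
        dmidecode -t 17    # per-slot details of the DIMMs currently installed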

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following: mount the latest __RECENT snapshot LUN to the host, using a specific drive letter; perform a TSM-based incremental backup; dis-mount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
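    On the return-code question specifically: when the TSM scheduler runs a command schedule, it records the exit code of the script it launches, so the key piece is to capture dsmc's %ERRORLEVEL% and hand it back with exit /b. A sketch; the mount/unmount helpers are placeholders for whatever SnapDrive commands the existing batch file already uses:

        @echo off
        rem placeholders for the existing SnapDrive mount/unmount steps
        call mount_recent_snapshot.cmd G:
        if errorlevel 1 exit /b %errorlevel%

        dsmc incremental G:\ -subdir=yes
        set RC=%errorlevel%

        call unmount_snapshot.cmd G:

        rem hand dsmc's result back to the TSM scheduler
        exit /b %RC%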

    Read the article
