Search Results

Search found 20761 results on 831 pages for 'chef client'.

Page 553/831

  • IIS7 response size thresholds

    - by DanielM
    I have a customer who is attempting video playback via HTTP progressive download of very large files (> 1 GB). There is no problem once a file is cached at the edge via my CDN, but hits to my origin (first hits prior to edge-cache population) experience stalling and loss of sync between audio and video about an hour and a half into playback. This occurs pretty reliably at that point, suggesting that some threshold somewhere is getting hit. Are there IIS configuration knobs governing HTTP response size? Other data points: I am unable to replicate this problem. I am looking at client bandwidth and last-mile issues. I am looking at possible encoding recipe dependencies. But this problem never came up when we were using a "push" cache configuration (CDN-hosted origin), so something funky server-side at my origin seems like a likely culprit. Thanks ...

    Read the article

  • Can a device (WAP or switch) be configured as an 802.1x supplicant?

    - by Allan Ross
    We are looking at implementing 802.1x on a wired/wireless network. What I am looking for is a device that can act as a supplicant and, once authenticated on the network, is able to pass traffic from any downstream connected device. The point of doing this would be to allow a properly pre-configured device to be provided to a client user, who could then connect any device on the downstream side of it. We would be able to manage the aggregate traffic on the device without concern for what is connected on the far side. Am I dreaming; does every device out there support this and I just don't know it, or does reality fall somewhere in the middle?

    Read the article

  • MySQL proxy HA with no need to reconnect after node failure

    - by Matthias
    I use MySQL with Galera wsrep to get synchronous replication; that part is up and running. I need to set up some kind of proxy to handle client connections. Since any node in the cluster can fail, clients will not connect to nodes directly, but only via the proxy. Currently I use Galera Load Balancer, which does the job, but with one exception: if a node fails, all clients connected via the proxy to that node get a connection error and need to reconnect. I have no control over the server applications connected to the proxy, and some of them can't reconnect automatically and need a manual restart. So the question is: how can the proxy automatically redirect already-connected applications to a new data node, without requiring them to reconnect?

    Read the article

  • Reliable Backup Solution for Linux for Complete System Restoration

    - by Chris S
    What's the best backup solution for Linux that can completely restore the entire filesystem to a blank hard drive (including partitioning) after an old hard drive dies? I'm currently running a few Ubuntu machines, some with RAID-1 and others without RAID (mostly laptops). I'd like to implement a backup solution that can take incremental snapshots of the entire filesystem, so that if I were to replace all the hard drives in a machine, I could use the backup to restore a perfect copy of the previous filesystem. Unfortunately, nearly all the backup solutions I've found seem to be glorified rsync scripts, which only back up some files and have no easy way to restore once the entire filesystem is gone. Some of the more complicated solutions, like Bacula, might do what I need, but require a complicated server/client setup and are notoriously difficult to maintain. I've heard that Apple's Time Machine utility has this ability, and I've had similar success taking differential disk images with Acronis True Image on Windows, but of course neither of these works on Linux. Is there anything comparable for Ubuntu?

    Read the article

  • Set up Windows SBS DNS server and VPN clients from a branch office

    - by mn
    I have some clients at a branch office which connect via VPN to the main office. The router at the branch office assigns addresses via DHCP from 192.168.1.0/255.255.255.0, and the remote gateway assigns VPN IP addresses from 10.10.20.0/255.255.255.0. There is a DNS server (Active Directory, Windows SBS 2000), and the VPN clients are registered with their VPN addresses (10.10.20.0/255.255.255.0 and domain company.com.pl). I would also like to register the primary branch subnet 192.168.1.0/255.255.255.0 with a domain, for example company.vpn.local. I want to access VPN hosts, for example dev3.company.pkb.local and dev3.company.com, using my Windows SBS 2000 DNS server.

    Read the article

  • Video Hosting External or Internal?

    - by user69334
    I have a client who wants to offer videos for downloading or streaming on his website. We have hosting for him, which is wonderful: it's reliable and fast, and offers unlimited space and databases, BUT it only offers 10 GB of bandwidth. The videos could easily be placed on his server (unlimited space), but the bandwidth is a real problem. Now I am wondering: if he purchased external hosting for his videos, and people wanted to download them, would this still eat up all his available bandwidth in no time because the download request goes via his site, or is there a way to circumvent this?

    Read the article

  • How do I trust an off-site application

    - by Pieter
    I need to implement something similar to a license server. This will have to be installed off site at the customers' location and needs to communicate with other applications at the customers' site (the applications that use the licenses) and with an application running in our hosting center (for reporting and getting license information). My question is how to set this up in a way I can trust that: (1) the license server is really our application and not something that just simulates it; and (2) there is no "man in the middle" (i.e. a proxy or something that alters the traffic). The first thing I thought of was to use client certificates, and that would solve at least (2). However, what I'm worried about is that someone just decompiles the license server (it is built in .NET), alters some logic, and recompiles it. This would be hard to detect from both connecting applications. This doesn't have to be absolutely secure, since we have a limited number of customers with whom we have a trust relationship. However, I do want to make it more difficult than a simple decompile/recompile of the license server. I primarily want to protect against an employee or the boss's nephew trying to be smart.
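
    One common building block for point (1) is a challenge-response check layered on top of the TLS/client-certificate channel. Below is a minimal sketch in Python (the system in the question is .NET, where the same idea maps onto HMACSHA256, so this is purely illustrative); the secret value and message sizes are assumptions. Note that anyone who can decompile the license server can also extract the secret, so this raises the bar rather than eliminating the decompile/recompile risk.

        import hmac
        import hashlib
        import os

        # Hypothetical per-customer secret provisioned out of band; extractable
        # by anyone who can decompile the binary that holds it.
        SHARED_SECRET = b"per-customer-secret-provisioned-out-of-band"

        def issue_challenge() -> bytes:
            """Connecting application generates a random nonce to send."""
            return os.urandom(32)

        def answer_challenge(nonce: bytes) -> bytes:
            """License server proves possession of the shared secret."""
            return hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()

        def verify(nonce: bytes, response: bytes) -> bool:
            """Connecting application checks the response in constant time."""
            expected = hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        if __name__ == "__main__":
            n = issue_challenge()
            assert verify(n, answer_challenge(n))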

    Read the article

  • Squid "system returned (13) Permission denied"

    - by AndyM
    I can get to a site from my Squid server directly using lynx http://my-URL, i.e. not using Squid as the proxy, just to prove the connectivity exists. Lynx connects fine to the site - it's a WebLogic portal. When I try the same site from a client with the Squid machine as the proxy, I get a Squid error indicating that the destination site refused the connection from Squid. The Squid server is a RHEL 5.5 server. The error is something like: The following error was encountered: Connection Failed. The system returned: (13) Permission denied. Any ideas? The Squid access.log just indicates a TCP_MISS. It's as if the destination site knows it's being accessed via Squid and is not allowing it?

    Read the article

  • Approach for developing software that will need to be ported to multiple mobile platforms in the future

    - by Jonathan Henson
    I am currently doing the preliminary design for a new product my company will be pushing out soon. We will start on Android, but then we will need to quickly develop the iPhone, iPad... and then the Windows 8 ports of the application. Basically the only code that wouldn't be reusable is the view-specific code and the multimedia functions. This will be a SIP client (of course the user of the app will not know this) with several bells and whistles for our own business-specific purposes. My initial thought is to develop all of the engine in C and provide a nice API for the C library, since Java and .NET allow native invoking, and I should be able to link directly to the C lib from Objective-C. Is there a better approach for vast code reuse which also remains close to the native platform? I.e. I don't want dependencies such as MonoDroid and the like, or complicated interpreter/translator schemes. I don't mind re-coding the view(s) for each platform, but I do not want to have multiple versions of the main engine. Also, if I want to have some good abstraction mechanisms (like I would in, say, C++), is this possible? I have read that C++ is not allowed on the iPad and iPhone devices. I would love to handle the media decoding in the C library, but I assume that this will be platform-dependent, so that probably will not be an option either. Any ideas here?

    Read the article

  • javaws crashes, error in ld-linux-x86-64.so.2

    - by user54214
    I am running the Ubuntu 11.10 64-bit client as Dom0 under Xen. I am having problems getting Java up and running. Java itself seems to work fine, however I get strange errors, for example when I start javaws. I tried different versions and always get the same errors. I tried OpenJDK 1.6 and 1.7 as well as Sun Java 6 and 7. I always get an error in the same lib. All other applications are working fine, so it seems ld-linux-x86-64.so.2 is working fine. Any hints what could be wrong?

        Ubuntu01:~$ javaws
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGILL (0x4) at pc=0x00007f4e74c5ad10, pid=7974, tid=139974945277696
        #
        # JRE version: 6.0_23-b23
        # Java VM: OpenJDK 64-Bit Server VM (20.0-b11 mixed mode linux-amd64 compressed oops)
        # Derivative: IcedTea6 1.11pre
        # Distribution: Ubuntu 11.10, package 6b23~pre11-0ubuntu1.11.10.2
        # Problematic frame:
        # C  [ld-linux-x86-64.so.2+0x14d10]  _dl_make_stack_executable+0x2b70
        #
        # An error report file with more information is saved as:
        # /home/r/hs_err_pid7974.log
        #
        # If you would like to submit a bug report, please include
        # instructions how to reproduce the bug and visit:
        #   https://bugs.launchpad.net/ubuntu/+source/openjdk-6/
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #
        Aborted

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed the Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard. Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config:

        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': 'dashboarddb',
                'USER': 'nova',
                'PASSWORD': 'nova',
                'HOST': 'localhost',
                'default-character-set': 'utf8'
            },
        }

    When I run the Dashboard server and enter the username + password, it returns this error in the browser: Unable to find the server at mykeystoneurl (HTTP 400). And on the command line:

        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        Validating models...
        0 errors found
        Django version 1.3.1, using settings 'openstack_dashboard.settings'
        Development server is running at http://0.0.0.0:8888/
        Quit the server with CONTROL-C.
        Request returned failure status.
        Traceback (most recent call last):
          File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
            body = json.loads(body)
          File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735

    I also tried logging in as "admin" with the password "password" or "secrete", but that didn't work. What's wrong? Thank you!
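
    The "mykeystoneurl" in the error message looks like a placeholder that was never replaced with a real Keystone endpoint, and in this version of the Dashboard the login credentials are validated against Keystone, not against the MySQL database under DATABASES (that database only stores Django session state). Below is a minimal sketch of the Keystone-related lines that local_settings.py is expected to contain; the host, port, and role values are assumptions and must match the actual Keystone service.

        # Sketch of the Keystone-related settings in local_settings.py
        # (values are assumptions -- point them at your real Keystone endpoint).
        OPENSTACK_HOST = "127.0.0.1"
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

        # The username/password entered on the login page must exist in Keystone
        # (e.g. created with the keystone CLI), not in the DATABASES backend.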

    Read the article

  • How to handle Real Time Data from a database perspective?

    - by balexandre
    I have an idea in mind, but the database part still confuses me. Imagine that I want to show real-time data; using one of the latest browser technologies (WebSockets - even usable with older browsers) it is very easy to show all observers (user browsers) what everyone is doing. Remy Sharp has an example of how simple this is. But I still don't get the database part. How would I feed it? Let's imagine (using Remy's game Tron) that I want to save the path of each connected user in a database, and a client wants to see what is going on with a 5-second delay - not only the 5 seconds up to that moment but the continuation in time... how can I query a DB like that? SELECT x, y FROM run WHERE time >= DATEADD(second, -5, rundate); is not the recommended path, right? And polling this every x seconds... that is not a real data feed, correct? If someone can help me understand the database point of view, I would greatly appreciate it.
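
    One common pattern here is to keep a per-viewer cursor (the last timestamp or row id already sent) and repeatedly ask the database only for rows newer than it, pushing each batch out over the WebSocket. A minimal sketch follows, using Python and SQLite purely for illustration (the table and column names are assumptions); the same idea works with any SQL database, and a message queue or the database's own notification mechanism (e.g. LISTEN/NOTIFY) removes the polling entirely.

        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE run (id INTEGER PRIMARY KEY, x REAL, y REAL, rundate REAL)")

        def follow(conn, start_after=0.0, delay=5.0, poll_interval=0.5):
            """Yield batches of (x, y) points, replaying history from `start_after`
            with a fixed delay, then continuing as new rows arrive (tail -f style)."""
            last_seen = start_after
            while True:
                cutoff = time.time() - delay      # only expose rows older than the delay
                rows = conn.execute(
                    "SELECT x, y, rundate FROM run "
                    "WHERE rundate > ? AND rundate <= ? ORDER BY rundate",
                    (last_seen, cutoff),
                ).fetchall()
                if rows:
                    last_seen = rows[-1][2]        # advance the cursor, never re-send rows
                    yield [(x, y) for x, y, _ in rows]
                time.sleep(poll_interval)

    Each connected viewer keeps its own last_seen, so someone joining late first drains the history quickly and then follows live updates; the WebSocket handler would simply iterate over follow(...) and send each batch to the browser.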

    Read the article

  • Large temp files created in Windows Server 2003 temp folder

    - by BlueGene
    I'm managing a Windows Server 2003 machine with around 30 GB of space in the primary partition. A couple of times the server has crashed with an error message saying that the C: drive is full. After searching folders to free up space, I found a lot of temp files being created in C:\WINNT\Temp, some of them of enormous size - more than 2 GB. The temp files have a common name, Efs###.tmp. Since we encrypt files frequently using Windows EFS, I initially suspected Windows encryption. But after reading the documentation, I found that Efs###.tmp files are indeed created by EFS, but only under the folder currently being encrypted, not in the Temp folder. This looks very strange, since Efs###.tmp files shouldn't be created under C:\WINNT\Temp unless someone tried to encrypt that Temp folder itself. The server has the Tivoli backup client. Could that be interfering with Windows encryption? Can anyone shed some light on what could be causing the issue?
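
    As a quick way to see which of these files are actually consuming the space, and when they were created (which can be correlated with backup or encryption job schedules), a small script like the sketch below can be run on the server. The path is taken from the question; Python would have to be installed on the box, so treat this as illustrative only.

        import os
        import time

        TEMP_DIR = r"C:\WINNT\Temp"   # path from the question

        def largest_files(path, top=20):
            """Return the `top` biggest files under `path` as (size, mtime, path) tuples."""
            entries = []
            for root, _dirs, files in os.walk(path):
                for name in files:
                    full = os.path.join(root, name)
                    try:
                        st = os.stat(full)
                    except OSError:
                        continue   # file removed or locked mid-scan
                    entries.append((st.st_size, st.st_mtime, full))
            return sorted(entries, reverse=True)[:top]

        if __name__ == "__main__":
            for size, mtime, full in largest_files(TEMP_DIR):
                stamp = time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime))
                print(f"{size / 2**30:7.2f} GiB  {stamp}  {full}")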

    Read the article

  • NFS mount mounted inside another NFS mount disappears randomly

    - by espenfjo
    I have quite an odd issue where my nested NFS mounts just disappear randomly from time to time. The fstab entries look somewhat like this:

        nfs:/home /home/nfs rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp 0 0
        nfs:/bigdir /home/bigdir nfs rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp,bg 0 0

    The issue is that from time to time the /home/bigdir folder will be empty, even though mtab thinks the share is still mounted. nfsstat et al. also think the share is still mounted. The only thing that works is unmounting and then (re)mounting the bigdir share. The server side is a NetApp. The client side is RHEL 5.5, 2.6.18-194 kernel (yes, I know 5.8 is out, but as far as I can see there are no errata for this particular issue). I can use various hacks like automount, or mounting it to another path and then using mount --bind, but I would like to fix the underlying issue. -- Best regards, Espen Fjellvær Olsen

    Read the article

  • Where can I find logs for SFTP?

    - by Jake
    I'm trying to set up sftp-server, but the client is getting an error: Connection closed by server with exitcode 1. /var/log/auth.log (below) doesn't help much; how can I find out what the error is? I'm running Ubuntu 10.04.1 LTS.

        sshd[27236]: Accepted password for theuser from (my ip) port 13547 ssh2
        sshd[27236]: pam_unix(sshd:session): session opened for user theuser by (uid=0)
        sshd[27300]: subsystem request for sftp
        sshd[27236]: pam_unix(sshd:session): session closed for user theuser

    Update: I've been prodding this for a while now, and I've got the sftp command on another server giving me a more useful error:

        Request for subsystem 'sftp' failed on channel 0
        Couldn't read packet: Connection reset by peer

    Everything I've found on the net suggests this is a problem with sftp-server, but when I remove the chroot from the sshd config I can access the system. I assume this means sftp-server is accessible and set up correctly.
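
    For more client-side detail than the sftp command gives, a short script using paramiko (an assumption - it is not mentioned in the question) can reproduce the connection and write a protocol-level debug log, which usually shows exactly where the server drops the session; the host, user, and password below are placeholders. On the server side, raising LogLevel in sshd_config (e.g. to VERBOSE or DEBUG) and reloading sshd is the usual complement, especially for chroot permission problems.

        import paramiko

        # Write the full SSH/SFTP protocol exchange to a local log file.
        paramiko.util.log_to_file("sftp_debug.log")

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a one-off test
        client.connect("server.example.com", username="theuser", password="secret")  # placeholders

        try:
            sftp = client.open_sftp()   # the step that fails with exit code 1
            print(sftp.listdir("."))
        finally:
            client.close()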

    Read the article

  • Ubuntu 9.04 Cannot Connect to visible open wifi ap (reason 6)

    - by Andrew Bolster
    I'm travelling at the moment, so the last network I connected to successfully was my home WPA-PSK network. I hadn't tried anything until I got to my accommodation, which has an open network (the one I'm on now from the Windows 7 partition of my laptop). The network (and a similar archetypical 'linksys' open network, as well as some protected local networks) is correctly displayed in network-manager, and upon selection it happily spins around to its heart's content for a while before saying 'no chance, boy'. /var/log/syslog spills out the usual combination of wpa_supplicant and kernel messages, the most interesting of which is the kernel deauthentication reason 6 response. 6 apparently means class2FrameFromNonAuthStation... "Client attempted to transfer data before it was authenticated." Anyone seen anything like this? I've already tried going closer to the router, to no avail. I don't remember seeing this any other time I've connected to an open AP, even if that AP is far away. (Signal strength for this AP is good; kismet says it's around -57 dBm, well above the threshold of -80 dBm, and I've tried all the suggestions from the 'Related Questions'.)

    Read the article

  • How to manage preventive maintenance planning for external IT support?

    - by code-gijoe
    I am a bit puzzled about how to handle server upgrade planning for software we maintain on remote sites. This is my case: I work for a software company that has many external clients. We are trying to be more agile in our development, so we plan to release small improvements every quarter, and we wish to keep our clients informed of maintenance schedules. Instead of having angry clients who believe the ROI of our support plan is low, we want to be more proactive. Let's say we have 100 machines to take care of; is there some tool to assist me in planning the maintenance with clients? Right now I get a call from an unhappy client requesting that we upgrade them, and that is when we go into panic mode and start making calls. That is when I need to check my calendar, coordinate with the other guys, call a few times, and change the date again and again until everyone is happy. Can this be done better?

    Read the article

  • What's more cost-effective: hosting your web startup on FOSS or Windows?

    - by user37899
    Hi, not coming from the Windows world, I'm confused about licensing; I think my knowledge may be out of date. Before we gave up on Windows web servers (IIS 2), we used to have to pay for Client Access Licenses, which worked out quite expensive. Is it cheaper to host thousands of users on Windows than to use free open source software tools? http://serverfault.com/questions/124329/network-load-balancing-efficience-and-limits - this post suggests I can pay $15 a month for unlimited users. I certainly hope to get an unbiased view; I am a professional and use the right technology for the right job. I hope I am not feeling the wrath of a Windows (or Linux, for that matter) fanboy. Perhaps a Microsoft-certified licensing person can clear this up. Should I be recommending Windows servers and products over LAMP to startups? Cheers!

    Read the article

  • ColdFusion 8 and 9 on IIS 6

    - by David Mesh
    I have a client who, against my better judgement, has insisted on doing the following. They have a single IIS 6 instance on Windows 2003 Server Enterprise, which currently runs ColdFusion 8. They want me to install ColdFusion 9 on the server without changing any of the existing sites, so that they can develop in CF 9 and upgrade other sites in the future. Yes, I have begged them to use a dev server, or to run Apache on the same box. Can this even be done? Many thanks in advance! DM

    Read the article

  • anti-virus / malware solution for small non-profit network

    - by Jason
    I'm an IT volunteer at a local non-profit, looking for a good AV/malware solution. We currently use a mishmash of different client solutions and want to move to something centralized. There is no full-time IT staff. What I'm looking for: centralized administration (the server is Windows Server 2003); minimal admin overhead; the ability to do e-mail notifications/alerts/reporting would be very cool; 10-25 XP clients (P3/P4 hardware); a free or discounted solution for non-profits. We can get a cheap license for Symantec Endpoint Protection. My past experience with Symantec has been bad, but I've heard good things about this product. However, I've also read that it's kind of a nightmare to set up and administer, and it may not be worth it for the size of our network.

    Read the article

  • Access points fighting for dominance?

    - by Phillip Oldham
    We have a small office with a large number of wireless devices (a mixture of desktop machines, laptops, and wifi-enabled phones) all working from a single Apple AirPort Extreme which extends our wired network. I've added another AirPort Extreme for resiliency, since we've been seeing a decrease in performance and (as far as I understand) access points can only handle a small number of clients. I set the new AP to extend the current network so that the clients weren't constantly switching between different wireless networks; however, as soon as this AP was configured, all the wireless devices started seeing network trouble, flicking on and off. I'm assuming that this is because both APs are reasonably strong and the client can't decide which to use. What is the best route to follow to resolve this? What I'm aiming for is wireless resiliency; preferably having the two APs share the network load, or if this isn't an option, then having a primary AP with a "fail-over" should the primary go down for any reason.

    Read the article

  • Altq limits not being applied to UDP transfers

    - by overkordbaever
    I have an OpenBSD server acting as a router/firewall with the packet filter ruleset shown below, plus a Linux server and a Linux client. When transferring files (using netcat) over TCP, the limits are applied (for example the 100 Mbit limit in the example), but when transferring data over UDP the limits aren't applied; the file always takes the same amount of time no matter what queue bandwidth limit I set (I can even turn off the queues completely and still get the same result). Why aren't the queueing rules applied to UDP packets? The rules used:

        #queue rules
        altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
        queue def bandwidth 0Mb cbq(default)
        queue low bandwidth 100Mb cbq

        #Passrules test
        pass out quick from $int_if to $ext_if queue low
        pass in quick from $ext_if to $int_if queue low
        pass out quick from $ext_if to $int_if queue low
        pass in quick from $int_if to $ext_if queue low

    I suppose this may be related to a question I've previously asked, but since it's more of a separate question, I suppose a separate question should be used for this.

    Read the article

  • Is there value in having new developers (graduates) start as testers / bug-fixers?

    - by Nico Huysamen
    Hi Programmers Community. What are your thoughts on the following: is there value in having new developers (graduates) start as testers / bug-fixers? There are two schools of thought here that I have come across. Having new developers (graduates) start as testers / bug-fixers / doing SLA (Service Level Agreement) work gets them familiar with the code base. It also gives them the opportunity to learn how to read [other people's] code. Furthermore, by fixing bugs, they will encounter certain bad and good practices, which could hopefully help them in the future. The other way of thinking, though, is that if you immediately start new developers on something like testing / bug-fixing / SLA work, their appetite for the development world might go away, and/or they might leave the company, and you potentially lose out on a great future resource. Is there a balance that should be kept between these two? Currently where I work there is no clear-cut definition of what new starters do: some go directly onto client work, while some fall into the SLA world. Should companies have such a policy, or should it be handled on a case-by-case or opportunity basis? I hope to hear from some of you who have experience in this area. Thanks!

    Read the article

  • Problem with keyboard layout when OS X 10.6.3 -> Fedora 13

    - by user20196
    Hi, I have VMware installed on Fedora 13 (the host OS), AMD 64-bit. When I log in to it from the console, VMware works fine. I wanted to use it remotely from OS X 10.6.3, so I installed the NX client. Everything is fine with the keyboard layout as long as I do not use VMware. When I try to run the guest OS in VMware, my keyboard is cut off completely. The host and the guest OSes are set up for the "us" layout and for a "generic 105-key" keyboard.

    Read the article

  • Why The Athene Group Chose Fusion CRM

    - by Tony Berk
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group This year, The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, given the current global landscape from an economic and competitive standpoint, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives that we discussed internally that would enable us to accomplish this - collaboration and the concept of "insight to action". With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM, we immediately saw several next-generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap), we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements - ensuring mutual success for both Athene and our clients. When we looked at "insight to action", we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article
