Search Results

Search found 4808 results on 193 pages for 'reserved instances'.

Page 97/193 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Connect to WEP Wireless Network by command line on Ubuntu

    - by Tim
    Hi, I am a newbie to both networking and Linux. I am trying to connect to a WEP wireless network from the command line on Ubuntu 8.10, because Network Manager does not support 64-bit WEP. (1) I first bring down Network Manager and then try to connect to a wireless network whose ESSID is candy and whose password is 5673212741, but it fails as shown below. I wonder why, and how to do it correctly?

      $ sudo /etc/init.d/NetworkManager stop
      * Stopping network connection manager NetworkManager [ OK ]
      $ sudo iwconfig wlan0 essid candy opendo iwconfig wlan0 key 18018ce78e open
      $ sudo iwconfig wlan0 key 5673212741 open
      $ sudo dhclient wlan0
      There is already a pid file /var/run/dhclient.pid with pid 9971
      killed old client process, removed PID file
      Internet Systems Consortium DHCP Client V3.1.1
      Copyright 2004-2008 Internet Systems Consortium. All rights reserved.
      For info, please visit http://www.isc.org/sw/dhcp/
      wmaster0: unknown hardware address type 801
      wmaster0: unknown hardware address type 801
      Listening on LPF/wlan0/00:0e:9b:cd:4e:18
      Sending on LPF/wlan0/00:0e:9b:cd:4e:18
      Sending on Socket/fallback
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 20
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 13
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 9
      No DHCPOFFERS received.
      No working leases in persistent database - sleeping.
      $ ping www.bbc.co.uk
      ping: unknown host www.bbc.co.uk

    (2) A less important question: why does scanning for wireless networks not work after I bring down Network Manager?

      $ sudo /etc/init.d/NetworkManager stop
      * Stopping network connection manager NetworkManager [ OK ]
      $ sudo iwlist wlan0 scan
      wlan0 Interface doesn't support scanning : Network is down

    Thanks and regards!
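
    A minimal command-line sketch of the usual WEP association sequence, assuming the interface name, ESSID and hex key from the question; whether the access point wants "open" or "restricted" (shared-key) authentication is an assumption you may need to flip:

      sudo /etc/init.d/NetworkManager stop   # keep Network Manager from re-configuring the interface
      sudo ifconfig wlan0 up                 # the interface must be up, or iwlist reports "Network is down"
      sudo iwlist wlan0 scan                 # confirm the access point is visible
      sudo iwconfig wlan0 essid candy        # set the network name
      sudo iwconfig wlan0 key 5673212741     # 10 hex digits = 64-bit WEP key; use "key s:passphrase" for an ASCII key
      sudo iwconfig wlan0 key open           # or "key restricted" for shared-key authentication
      sudo dhclient wlan0                    # request an address once associated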

    Read the article

  • Windows 7 taskbar items not grouping properly

    - by Joel in Gö
    I don't understand the Windows 7 taskbar behaviour. For some programs it will not group the running instances, or will group some of them but not all. I have set taskbar items to "always combine", but this has not helped. There seem to be two issues: apps whose running taskbar icon differs from their launcher icon; and Visual Studio, which groups separately depending on whether it was started by double-clicking a project or by launching the IDE's .exe directly. Is there any way to force the items to combine? I quite like the Win7 taskbar, and would like it to work consistently...

    Read the article

  • My server appears to have been hacked: scanssh run by Zabbix, is it normal?

    - by Niro
    I'm running a few EC2/Scalr instances with Zabbix monitoring. I received complaints about one of my servers port-scanning other servers. The logs show it is accessing port 22 on consecutive IP addresses. I looked at the process list and saw scanssh running under the user zabbix. My question is: is scanssh part of Zabbix? Is it supposed to run? I have active autodiscovery on Zabbix, but it is looking at other IP addresses and definitely not port 20. Is it possible that something in the config of the Zabbix agent is controlling it, and not the settings on the Zabbix server? What can I do to find out whether Zabbix is somehow misbehaving or it is a hacker? Any advice is highly appreciated.
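
    A short triage sketch; scanssh is not something the stock Zabbix agent is documented to launch, so the commands below only establish where the binary came from and what started it (package tools and paths are typical defaults and may differ on a Scalr-built image):

      ps -ef | grep -i scanssh                    # confirm the process, its parent and its arguments
      ls -l /proc/$(pgrep -o scanssh)/exe         # where the running binary actually lives
      rpm -qf "$(which scanssh)"                  # does a package own it? (dpkg -S on Debian-family systems)
      grep -ri scanssh /etc/zabbix /etc/cron*     # is it referenced by agent config or a cron job?
      last -a | head                              # recent logins, in case it really is an intruder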

    Read the article

  • diskmgmt.msc: Cannot delete volume from USB

    - by Notinlist
    I have a USB drive of about 8GB. It has a single partition of 169MB; I don't know why, I got it that way. I wanted to delete this small (FAT32) partition and create a single NTFS volume on it. First, I noticed that the "Delete volume" option is disabled (grayed out). I then tried "Change drive letter and paths..." and removed "F:", so I made sure that there are no open files on it. "Delete volume" was still disabled. Then I got suspicious and right-clicked on the "Unallocated" area, and I noticed that I did not have any useful option there either: all "New * volume" items are disabled. I exited diskmgmt.msc, ran a cmd.exe with administrator privileges, ran diskmgmt.msc from it - same experience. Why can't I do anything with this disk? I've read some advice about downloading alternative free software, but I'd rather not do that if possible. I still hope that Windows 7 Enterprise 64-bit alone can reinitialize a USB drive without external help. I also cannot do anything with my other 8GB pendrive. It's all one NTFS volume; I tried to delete it, but the option is disabled there too. Maybe I have some setting somewhere that prevents me from partitioning USB disks. (I do have the freedom to remove my D: partition, which is the second - not counting the "System reserved" one - on my SSD.)
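
    If Disk Management keeps everything grayed out, the built-in diskpart utility can usually reinitialize a removable disk without third-party software. A hedged sketch from an elevated command prompt; the disk number is an assumption, so check the list disk output (and the size column) carefully before running clean, because clean wipes the whole partition table of the selected disk:

      diskpart
      list disk
      select disk 2
      attributes disk clear readonly
      clean
      create partition primary
      format fs=ntfs quick
      assign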

    Read the article

  • Multiple IP Addresses on a Traceroute Line

    - by Paul
    I'm doing a traceroute from my box to, say, stackoverflow.com. I see a couple of instances where there are multiple IPs on one line. For instance, below, line #2 has two IPs: 10.1.6.5 and 10.1.4.5. Also on line #4, there are two timestamps after 216.182.236.96: 0.653 ms and 0.637 ms. What are these? This is on Linux. Traceroute example:

      traceroute to www.stackoverflow.com (198.252.206.16), 30 hops max, 60 byte packets
       2  ip-10-1-6-5.us-west-1.compute.internal (10.1.6.5)  0.329 ms  0.425 ms  ip-10-1-4-5.us-west-1.compute.internal (10.1.4.5)  0.471 ms
       4  216.182.236.104 (216.182.236.104)  0.554 ms  216.182.236.96 (216.182.236.96)  0.653 ms  0.637 ms
       5  205.251.230.64 (205.251.230.64)  0.616 ms  205.251.229.232 (205.251.229.232)  1.305 ms  205.251.230.64 (205.251.230.64)  0.573 ms
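
    For what it's worth, traceroute sends three probes per hop by default and prints an address only when the responder changes; when an upstream router load-balances across equal-cost paths, probes for the same hop can be answered by different routers, which is what lines 2, 4 and 5 show (on line 4, 216.182.236.104 answered the first probe and 216.182.236.96 answered the remaining two). A couple of hedged invocations to confirm this:

      traceroute -q 1 www.stackoverflow.com    # one probe per hop, so each line shows a single responder
      traceroute -n -q 3 www.stackoverflow.com # numeric output; each RTT is attributed to the IP printed before it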

    Read the article

  • After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB

    - by Tom Crane
    (I have also posted this on TechNet, but I'm running out of ideas.) I've upgraded from Windows Server 2008 R2 Standard to Enterprise in order to make use of more RAM. The server previously had 32GB of RAM. The upgrade from Standard to Enterprise, using DISM, seemed to go OK, so I powered down and installed the RAM. This is a Dell PowerEdge T710; I was taking it from 32GB to 72GB. The BIOS recognised the RAM, although I needed to change from "Advanced ECC" to "Optimizer" mode for it to use all of it. After rebooting, Windows can see the RAM, but the System panel displays: Installed memory (RAM): 72.0 GB (4.00 GB usable). In Resource Monitor, the remainder of the RAM shows as reserved for hardware. I've tried various RAM configurations, including reverting to the same chips and the same configuration as before the upgrade, but always just 4.00 GB shows up as usable. Following some threads on these forums I've gone into msconfig and set the maximum memory "by hand", but that doesn't fix the problem. The BIOS doesn't seem to have anything that looks like memory remapping, which is another suggestion that has come up. How do I make this RAM available to Windows? It was available before the upgrade, because I could use the full 32GB the server started with. A screenshot (this is after reverting to the original RAM configuration): http://screencast.com/t/5FuzevdNb I don't know if it's related, but my Remote Desktop configuration has also disappeared: screencast.com/t/mYedomeQWS (the bottom half of this dialog should allow me to configure Remote Desktop; it was working before the upgrade but now it isn't).

    Read the article

  • Elastic beanstalk access private git repo

    - by user221676
    I am currently trying to add an SSH key to my Elastic Beanstalk instances using .ebextensions commands. The keys are stored in my application code, and I copy them to root's .ssh folder so I can use them for a git+ssh clone later. Here is an example of the config file in my .ebextensions folder:

      packages:
        yum:
          git: []
      container_commands:
        01-move-ssh-keys:
          command: "cp .ssh/* ~root/.ssh/; chmod 400 ~root/.ssh/tca_read_rsa; chmod 400 ~root/.ssh/tca_read_rsa.pub; chmod 644 ~root/.ssh/known_hosts;"
        02-add-ssh-keys:
          command: "ssh-add ~root/.ssh/tca_read_rsa"

    The problem is that I get an error when attempting to clone the repo: Host key verification failed. I have tried many ways of adding the host to the known_hosts file, but none have worked! The command that does the clone is npm install, as the repo points to a node module.
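
    One hedged way around the interactive host-key prompt is to trust the Git host's key before npm install runs, for example as an extra container_command; the host name below is an example and would be whatever host the git+ssh URL in package.json points at:

      ssh-keyscan -t rsa github.com >> ~root/.ssh/known_hosts    # pre-seed the host key non-interactively
      chmod 644 ~root/.ssh/known_hosts
      # or, per host, skip strict checking entirely:
      printf 'Host github.com\n  StrictHostKeyChecking no\n' >> ~root/.ssh/config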

    Read the article

  • How do you determine how long it is taking Apache to forward a request to Phusion Passenger?

    - by dan
    I have a Ruby on Rails website that is serving requests relatively fast within Rails. The completion time for a Rails request is about 130ms. But the request still takes a long time because of the time it takes the Apache server in front of the Phusion Passenger instances to hand off the request to Rails. How can I measure how long it takes Apache to hand off the request to Rails via Passenger? And how can I speed this up if it's slow. Yes, I plan on switching to nginx, but I need a temporary fix.
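
    One hedged way to see the hand-off cost on the Apache side is to log %D (the total time Apache spent on the request, in microseconds) and compare it with the ~130 ms the Rails log reports; the gap is the queueing/hand-off time in front of Passenger. The log path and format name below are examples:

      LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed_combined
      CustomLog /var/log/apache2/access_timed.log timed_combined

    If passenger-status is available, it can also show whether requests are queueing because all application instances are busy.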

    Read the article

  • Encrypt EC2 API call

    - by Frank
    I have to host an AMI in the Amazon Marketplace. I need to get the type of instance whenever some user launches the AMI, i.e. whether it is small, medium or large, and based on that I need to make some changes in the AMI when it is created. I can get the instance type with an Amazon API call, but the problem is that the instances created from the AMI will be started by other users, and I cannot embed my AWS credentials in the Amazon API call. Is there any way that I can create an anonymous read-only user to make only specific types of EC2 API calls? Or can I encrypt my EC2 API credentials so no one else can use them?
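
    For the specific case of discovering the instance type, no credentials are needed at all: every EC2 instance can query the instance metadata service from inside. A minimal sketch (the startup-script placement and the case branches are assumptions for illustration):

      #!/bin/sh
      # Ask the local metadata service (link-local address, reachable only from the instance itself)
      INSTANCE_TYPE=$(curl -s http://169.254.169.254/latest/meta-data/instance-type)
      case "$INSTANCE_TYPE" in
        m1.small)  echo "applying small-instance settings" ;;
        m1.medium) echo "applying medium-instance settings" ;;
        *)         echo "applying large/default settings" ;;
      esac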

    Read the article

  • Help, I need to debug my BrowserHelperObject (BHO) (in C++) after an Internet Explorer 8 crash in Release mode

    - by BHOdevelopper
    Hi, here is the situation: I'm developing a Browser Helper Object (BHO) in C++ with Visual Studio 2008, and I learned that memory isn't managed the same way in Debug mode as in Release mode. When I run my BHO in Debug mode, Internet Explorer 8 works just fine and I get no errors at all; the browser stays alive forever. But as soon as I compile it in Release mode, I get no errors, no message, nothing - yet after 5 minutes I can see in Task Manager that the Internet Explorer instances are just eating memory, and then the browser stops responding every time. Please, I really need some hints on how to get feedback on what the error could be. I have heard that this often happens because of memory mismanagement. I need software that grabs a memory dump or something when iexplore crashes, to help me find the problem. Any help is appreciated; I'll be checking for responses every single day, thank you.
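
    A hedged starting point, assuming the Debugging Tools for Windows are installed: ADPlus can snapshot or watch the iexplore.exe processes and write dumps you can open later in WinDbg (the output folder is an example):

      rem snapshot processes that are eating memory (hang mode), or wait for an exception (crash mode)
      adplus -hang -pn iexplore.exe -o C:\dumps
      adplus -crash -pn iexplore.exe -o C:\dumps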

    Read the article

  • Automation Question using VMWare Workstation

    - by James K
    I'm running an experiment that requires me to create 100 instances of Windows XP w/SP3 and save each VM instance off to a hard drive. I have to record the time the VM load starts (starting my timer when I see "Setup is preparing...") until the load ends, when I see the final desktop after the VM loads its drivers. I also have to record the host start and stop times. Is there any way this process can be automated? Each load runs about 16 minutes and gets real tiresome after a while. BTW, exact timing is not necessary; eyeballing as described above is sufficient for my testing needs.
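
    The host-side part at least can be scripted with vmrun, the command-line tool that ships with VMware Workstation; a hedged sketch for a Linux host (the .vmx path is an example, and detecting the in-guest desktop still needs a hook inside the guest or eyeballing):

      #!/bin/sh
      VMX="/vms/xp-sp3-clone-01/WindowsXP.vmx"     # example path to one cloned VM
      date "+%F %T  host start"                    # record the host start time
      vmrun -T ws start "$VMX" nogui               # boot the VM without opening the Workstation UI
      # ... wait for the guest desktop (manual, or a guest-side script writing to a shared folder) ...
      vmrun -T ws stop "$VMX" soft                 # clean shutdown once the load is done
      date "+%F %T  host stop"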

    Read the article

  • Providing high availability and failover using MySQL on EC2

    - by crb
    I would like to have a highly available MySQL system, with automatic failover, running on Amazon EC2 instances. The standard approach to this problem is Heartbeat + DRBD, but I've found a lot of posts suggesting DRBD doesn't work on EC2, though none saying exactly why. Obviously, a serial heartbeat or a distinct network is out of the question in the virtualised environment. It would also be good to have the different servers in different availability zones, but we're getting into a much harder problem there. What are people's opinions on having a high-uptime solution in "the cloud"? Note: This question was asked before RDS with multi-AZ was announced, which is the nice automatic answer for today's modern IT professional. :)

    Read the article

  • Windows 2008 additional disk going offline with reboots on Amazon EC2

    - by Ernest Mueller
    OK, so I took the stock Windows 2008 64-bit Amazon AMI and wanted to add a D: drive for page file space and crash dumps. I launched the instance with a second EBS volume attached as xvdf, went into Disk Management, set it online, and added the page file and crash dump settings, and all that works. But when I reboot, the box comes back up with that second drive as "Offline". How do I get that disk to automatically come online on reboot (or, most notably, when I turn this into an AMI and launch more instances off it - I've tried that too, and it's the same deal with the D: drive)?
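
    A hedged explanation worth checking: on some Server 2008 editions the default SAN policy keeps newly discovered data disks offline, which is a common reason secondary EBS volumes come up offline after a reboot or on instances launched from a re-imaged AMI. From an elevated prompt, diskpart can change the policy and bring the disk online (disk 1 is an assumption, check the list disk output first; the exact online syntax varies slightly between 2008 and 2008 R2):

      diskpart
      san policy=OnlineAll
      list disk
      select disk 1
      attributes disk clear readonly
      online disk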

    Read the article

  • Vista - perform scheduled actions only if screen is not locked

    - by Syntax Error
    Ok, here's the general idea of what I want to do. After a certain time, I would like the computer to nag me to go to sleep, maybe every five minutes or so. But I don't want the messages to pop up if the screen is locked, because I leave it like that all night. Ideally I would like to be able to do more, like shut down running instances of the web browser, or lock my user session if I ignore the notices for too long. But I'm happy with just popup messages if that's all I can do. So, how much of this is possible and where do I start? I'm not too well versed with Task Scheduler, and I'm assuming I'll use that to at least start whatever script I have to put together.

    Read the article

  • Dell OMCI: Wacky values for Temperature, etc.? (Win7x64)

    - by Yablargo
    Hey all. I am running a Dell Precision R5400 workstation with Dell OMCI installed. I am using it to test polling various data over WMI for our monitoring across the enterprise, and I'm getting some weird results; perhaps someone can help point me in the direction of some clarification? Below are the results of my DCIM\SYSMAN\DCIM_NumericSensor probe for sensor type 2 (temperature sensor):

      Microsoft (R) Windows Script Host Version 5.8
      Copyright (C) Microsoft Corporation. All rights reserved.

      -----------------------------------
      DCIM_NumericSensor instance
      -----------------------------------
      Accuracy:
      AccuracyUnits:
      AdditionalAvailability:
      Availability:
      AvailableRequestedStates:
      BaseUnits: 2
      Caption:
      CommunicationStatus:
      CreationClassName: DCIM_NumericSensor
      CurrentReading: -214748365
      CurrentState: Unknown
      Description:
      DetailedStatus:
      DeviceID: Root/MainSystemChassis/TemperatureObj
      ElementName: Temperature Sensor:CPU0
      EnabledDefault: 2
      EnabledState: 2
      EnabledThresholds:
      ErrorCleared:
      ErrorDescription:
      HealthState: 5
      Hysteresis:
      IdentifyingDescriptions:
      InstallDate:
      IsLinear:
      LastErrorCode:
      LocationIndicator:
      LowerThresholdCritical:
      LowerThresholdFatal:
      LowerThresholdNonCritical:
      MaxQuiesceTime:
      MaxReadable:
      MinReadable:
      Name:
      NominalReading:
      NormalMax:
      NormalMin:
      OperatingStatus:
      OperationalStatus: 2
      OtherEnabledState:
      OtherIdentifyingInfo:
      OtherSensorTypeDescription:
      PollingInterval:
      PossibleStates: Unknown,Normal,Fatal,Lower Non-Critical,Upper Non-Critical,Lower Critical,Upper Critical
      PowerManagementCapabilities:
      PowerManagementSupported:
      PowerOnHours:
      PrimaryStatus:
      ProgrammaticAccuracy:
      RateUnits: 0
      RequestedState: 12
      Resolution:
      SensorType: 2
      SettableThresholds:
      Status:
      StatusDescriptions:
      StatusInfo:
      SupportedThresholds:
      SystemCreationClassName: DCIM_ComputerSystem
      SystemName: dt:5Q7BKK1
      TimeOfLastStateChange:
      Tolerance:
      TotalPowerOnHours:
      TransitioningToState: 12
      UnitModifier: 0
      UpperThresholdCritical:
      UpperThresholdFatal:
      UpperThresholdNonCritical:
      ValueFormulation: 2

    I'm not really sure what's going on, but note the CurrentReading: -214748365. I have reinstalled OMCI a few times, installed the OMCI 7x compatibility package, and same thing - I consistently get that error. It almost looks like an issue with 32/64-bit values or something? Do I have to convert it to a float? :)

    Read the article

  • Jquery autocomplete UI - No results on multiple fields

    - by pjammer
    Andrew's answer to my comment has sparked this question. According to his awesome answer in the link above, the code at the bottom of the question will only work for ONE widget. But it's killer nice code and makes sense... I guess I want the best of both worlds. Nice JS, (if that is possible) and to have the zero results show() just the element that we're using at the time. This code snippet is the main crux of my problem, as I see it: source: function (request, response) { jQuery.ajax({ url: "/autocomplete.json", data: { term: request.term }, success: function (data) { if (data.length == 0) { jQuery('span.guest_investor_email').show(); jQuery('span.investor_field_delete_button').show(); } response(data); } }); Currently: I have a button on my page that says "Add more Information" and each time you click it, a new instance of the autocomplete text field appears, complete with some hidden fields and a display:none; on guest_investor_email. If I use the autocomplete text field, say 3 times, and i have 3 autocomplete instances on the page and the third one finds 0 results: The code will show() all 3 instances of the guest_investor_email text field, instead of just this one that is blank. QUESTION: How do i get something like jQuery(this).siblings(('span.guest_investor_email').show(); to work? this is an Object and not an array of elements to select. If it isn't with this I don't mind, as long as I know how to get at it. Thanks. Full Code: jQuery(".auto_search_complete").live("click", function() { jQuery(this).autocomplete({ minLength: 3, source: function (request, response) { jQuery.ajax({ url: "/autocomplete.json", data: { term: request.term }, success: function (data) { if (data.length == 0) { jQuery('span.guest_investor_email').show(); jQuery('span.investor_field_delete_button').show(); } response(data); } }); }, focus: function(event, ui) { jQuery(this).val(ui.item.user ? ui.item.user.name : ui.item.pitch.name); return false; }, select: function(event, ui) { jQuery(this).val(ui.item.user ? ui.item.user.name : ui.item.pitch.name); jQuery(this).siblings('div.hidden_fields').children('.poly_id').val(ui.item.user ? ui.item.user.id : ui.item.pitch.id); jQuery(this).siblings('div.hidden_fields').children('.poly_type').val(ui.item.user ? "User" : "Pitch"); jQuery(this).siblings('span.guest_investor_email').hide(); jQuery(this).siblings('span.investor_field_delete_button').show(); jQuery(this).attr('readonly','readonly'); jQuery(this).attr('id', "investor-selected"); return false; } }).each(function() { jQuery(this).data( "autocomplete" )._renderItem = function( ul, item ) { return jQuery( "" ) .data( "item.autocomplete", item ) .append("" + (item.user ? item.user.name : item.pitch.name) + "" + (item.user ? item.user.investor_type : item.pitch.investor_type) + " - " + (item.user ? item.user.city : item.pitch.city) + "" ) .appendTo( ul ); }; }); });

    Read the article

  • Can I automatically add a new host to known_hosts ?

    - by gareth_bowles
    Here's my situation; I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via SSH. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first SSH command run against a new virtual instance always comes up with an interactive prompt: The authenticity of host '[hostname] ([IP address])' can't be established. RSA key fingerprint is [key fingerprint]. Are you sure you want to continue connecting (yes/no)? Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image ? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
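
    Both halves of that idea work; a hedged sketch (host name and key type are examples) of the two usual non-interactive approaches, either of which can also be baked into the VM image or the client's ssh_config:

      # pre-seed the key from the central client before the first command
      ssh-keyscan -H new-vm-01.example.com >> ~/.ssh/known_hosts
      # or accept unknown keys automatically for this connection only
      ssh -o StrictHostKeyChecking=no user@new-vm-01.example.com uptime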

    Read the article

  • Performance tweaks and upgrades for VMWare Server 2

    - by sjohnston
    Our software department has a server running VMWare Server 2. We typically have 8-10 VMs running as test environments (Win XP and Server 08) for various versions of our software, and one VM that is used as a build server (Win XP). The host is running Server 2003 R2. It has 32GB RAM, 8 core Xeon 3.16GHz CPU, one disk for host OS and two raid disks for VMs. The majority of the time, this setup behaves very well and there are no complaints. Other times, the VMs can be very laggy. This is sometimes, but not always, correlated to heavy load on the build server. I'm a software developer, not an IT pro, but it seems to me that this machine should be beefy enough to handle this many VMs. Is this occasional performance hit likely just because we're hitting the limits of the hardware, or should I be looking for another culprit? From what I've read, I'm guessing if there's a bottleneck, it's probably disk I/O with all these VMs running off two disks (especially the build server). Would spreading the VMs over more disks, and/or switching to SSDs give us a significant performance boost? Other things I've read may increase performance: single virtual processor per VM removing/disabling unused virtual hardware preallocated disk space not using snapshots setting a reserved memory limit on the host and disabling VM memory swapping Can anyone confirm or deny if any of these improve performance? What other good tweaks have I missed?

    Read the article

  • Three apps going through apache. How to configure apache httpd?

    - by Chris F.
    I have a quick question, but I've been struggling to find the best solution: I have two Java webapps and WordPress (PHP) that I need to serve through my production website. App #1 should be accessed when pointing to www.example.com/ (it has other URLs too, such as www.example.com/book). App #2 should be accessed when pointing to www.example.com/manage. Finally, WordPress would be accessed at www.example.com/info. How can I configure Apache to serve all three at the same time? So far I have the following, and it's not quite working right. Any suggestions would be much appreciated!

      Listen 8081
      <VirtualHost *:8081>
          DocumentRoot /var/www/html
      </VirtualHost>

      ProxyPass /manage http://127.0.0.1:8080/manage
      ProxyPassReverse /manage http://127.0.0.1:8080/manage
      ProxyPass /info http://127.0.0.1:8081/info
      ProxyPassReverse /info http://127.0.0.1:8081/info
      ProxyPass / http://127.0.0.1:9000/
      ProxyPassReverse / http://127.0.0.1:9000/
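
    A hedged alternative that avoids proxying Apache back to itself on port 8081: let the main virtual host serve the WordPress files directly and exclude /info from proxying with a ProxyPass exclusion, keeping the more specific paths above the catch-all (ports and paths are taken from the question; the WordPress directory is an assumption):

      <VirtualHost *:80>
          ServerName www.example.com
          DocumentRoot /var/www/html            # assumes WordPress is installed in /var/www/html/info

          ProxyPass        /info !              # do not proxy /info; Apache serves it itself
          ProxyPass        /manage http://127.0.0.1:8080/manage
          ProxyPassReverse /manage http://127.0.0.1:8080/manage
          ProxyPass        /        http://127.0.0.1:9000/
          ProxyPassReverse /        http://127.0.0.1:9000/
      </VirtualHost>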

    Read the article

  • What's the utility of the return command in autohotkey?

    - by Shashank Sawant
    In the instances where the return command returns a value, the utility is obvious. I have seen the return command being used even when it is seemingly unnecessary. Let me show the following examples.

    Example 1:

      Loop
      {
          if a_index > 25
              break     ; Terminate the loop
          if a_index < 20
              continue  ; Skip the below and start a new iteration
          MsgBox, a_index = %a_index%  ; This will display only the numbers 20 through 25
      }

    Example 2:

      IfWinExist, Untitled - Notepad
      {
          WinActivate  ; Automatically uses the window found above.
          return
      }

    Why is the return command used in Example 2 but not in Example 1? Both examples are copy-pasted/modified from the autohotkey.com documentation.

    Read the article

  • Deploy Jetty as port 80 daemon on Linux

    - by McKAMEY
    I'm curious what techniques you Linux admin gods are using to manage your Jetty deployments. I come from a Windows Server background, so I'm still getting used to all of this. I've been looking for a good solution for deploying Jetty instances on port 80 on a Linux installation. So far I've seen this thread, which allows Jetty to run as a daemon: http://jira.codehaus.org/browse/JETTY-458 And I've seen this thread, which talks about alternatives for setting up on port 80: http://wiki.eclipse.org/Jetty/Howto/Port80 These all seemed kind of hacky. Surely there is a relatively standard way of deploying a web server like Jetty on Linux. I'm currently using CentOS 5.5 but am open to other distros. Thanks in advance.
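
    One common and fairly standard pattern is to keep Jetty running unprivileged on a high port and let the kernel redirect port 80 to it; a hedged sketch for CentOS 5 (port 8080 and the persistence step are assumptions about the setup):

      # redirect inbound port 80 to Jetty on 8080
      iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
      # make the rule survive a reboot on CentOS 5
      service iptables save

    Alternatives such as authbind, jsvc, or fronting Jetty with Apache/nginx achieve the same thing without iptables.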

    Read the article

  • Ubuntu nasty error: The panel encountered a problem while loading xxxApplet

    - by Phuong Nguyen
    I have installed Ubuntu & GNOME (say, with the minimum possible number of packages). I can log in and do anything I want. However, there is a nasty thing: whenever I log in, I see this message: Error - The panel encountered a problem while loading "OAFIID:GNOME_FastUserSwitchApplet". Do you want to delete the applet from your configuration? [Don't Delete] [Delete] If I press [Delete] then the error won't be shown anymore. However, for every newly created account, the message is shown again (users are created using sudo adduser user_name). Since I clone this OS into several virtual instances and create new accounts on those instances, I wonder if there is a way to configure my Ubuntu so that newly created users don't have to see this annoying message? Thanks

    Read the article

  • MySQL won't stop doing stuff

    - by Felix
    Sorry for the title of the question, here's my problem: I've been trying to set up some scripts that import a lot of stuff hourly from an external source. They seemed to work fine, so I set up a cronjob to run them every hour. One day later I find six or seven instances of that script just hogging the MySQL server, making it unresponsive. I killed their processes, but MySQL was still not responding. I had to kill MySQL, reboot and then MySQL started working again (who knows on what) and being unresponsive (yes, I did remove the scripts from the cronjobs). I SHOW PROCESSLISTed and killed every process I could find. Still nothing, MySQL is hogging the HDD and is at the top of top and making the server load go up in the sky. I don't know what to do, if I kill and start it again it will probably do the same thing. What should I do?
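
    A hedged guess worth ruling out before anything drastic: if the import scripts were killed mid-transaction (or mysqld itself was killed), InnoDB will spend a long time rolling back or replaying that work at startup, with heavy disk I/O and nothing visible in SHOW PROCESSLIST. A few commands to see what mysqld is actually doing (credentials and paths are examples):

      mysqladmin -u root -p processlist                   # like SHOW PROCESSLIST, including system threads
      mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"   # pending transactions, rollback progress, I/O activity
      ls -lh /var/lib/mysql                               # unusually large ibdata / log files?
      iotop -o                                            # confirm it really is mysqld doing the disk I/O (if installed)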

    Read the article

  • Could not find codec parameters in ffmpeg in Windows

    - by Grienders
    While trying to convert wmv to animated gif using ffmpeg in Windows 7, I ran into an issue.

      Microsoft Windows [Version 6.1.7600]
      Copyright (c) 2009 Microsoft Corporation. All rights reserved.

      C:\>ffmpeg -i test.wmv test.gif
      ffmpeg version N-39877-g4fa706a Copyright (c) 2000-2012 the FFmpeg developers
        built on Apr 16 2012 14:57:12 with gcc 4.6.3
        configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
        libavutil 51. 46.100 / 51. 46.100
        libavcodec 54. 14.101 / 54. 14.101
        libavformat 54. 3.100 / 54. 3.100
        libavdevice 53. 4.100 / 53. 4.100
        libavfilter 2. 70.100 / 2. 70.100
        libswscale 2. 1.100 / 2. 1.100
        libswresample 0. 11.100 / 0. 11.100
        libpostproc 52. 0.100 / 52. 0.100
      [asf @ 0000000001f3ead0] Could not find codec parameters (Video: none (MTS2 / 0x3253544D), 800x400, 30000 kb/s)
      test.wmv: could not find codec parameters

    What does this mean and how can I solve it?

    Read the article

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I wish to use Varnish in front of an Apache and a Tomcat on the same server. Depending on the IP requested, it goes to a different backend; this works. Now, for most of the sites the default Varnish logic will work just fine. However, for some specific sites I wish to use custom VCL code. I can test for the host name and include config files for the specific domains, but this only works inside the individual methods (recv etc.). Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
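
    One hedged approach relies on the fact that VCL concatenates multiple definitions of the same subroutine in the order they appear: each domain gets a single file containing its own vcl_recv/vcl_fetch blocks guarded by a host check, and default.vcl just includes that file at the top level (the file name, host and backend name below are examples):

      # in default.vcl, at the top level
      include "example_com.vcl";

      # example_com.vcl - everything for this one domain in one file
      sub vcl_recv {
          if (req.http.host ~ "(^|\.)example\.com$") {
              set req.backend = tomcat;    # assumes a backend named 'tomcat' is declared in default.vcl
          }
      }
      sub vcl_fetch {
          if (req.http.host ~ "(^|\.)example\.com$") {
              set beresp.ttl = 1h;         # example of a per-domain cache policy
          }
      }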

    Read the article
