Search Results

Search found 27042 results on 1082 pages for 'google forms'.


  • running red5 on port 80

    - by ArneLovius
    I have a red5 application http://code.google.com/p/openmeetings that runs under red5 and is accessible on ports 5080 and 8443. I've installed it on Ubuntu 10.04. The eventual aim is to have it accessible via https on 443 instead of 8443, but I thought I would initially try port 80 so that any issues were just down to the port configuration and not SSL certificates. I've tried changing the port from 5080 to 80 in the red5.properties file, but it fails to start. In red5.log I have seen ERROR o.a.coyote.http11.Http11Protocol - Error initializing endpoint java.net.BindException: Permission denied /0.0.0.0:80 In error.log I have seen ERROR o.a.coyote.http11.Http11Protocol - Error initializing endpoint java.net.BindException: Permission denied /0.0.0.0:80 and ERROR org.red5.server.tomcat.TomcatLoader - Error loading tomcat, unable to bind connector. You may not have permission to use the selected port org.apache.catalina.LifecycleException: Protocol handler initialization failed: java.net.BindException: Permission denied /0.0.0.0:80 There is nothing else installed or running on port 80, so I presume that this is a "needs to be root" situation. I would rather not run an Internet-accessible web service as root. I know that Tomcat can run on port 80 by changing “#AUTHBIND=no” to “AUTHBIND=yes” in /etc/default/tomcat6, but I have not been able to find anything similar for red5. Am I on a hiding to nothing, or is there a better way than running as root? Thanks!
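
    One commonly suggested way around this, sketched below under the assumption that red5 stays on its default port 5080 and runs as an unprivileged user, is to let iptables redirect port 80 rather than binding it directly; authbind can also wrap the red5 startup script the same way it wraps Tomcat.

        # Redirect external port 80 to red5's unprivileged port (a sketch, not
        # tested against this exact setup):
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 5080
        # Alternatively, allow the red5 user to bind port 80 via authbind and
        # launch red5 through it (assumes the service runs as user "red5"; the
        # red5.sh path is a placeholder for however red5 is started on this box):
        touch /etc/authbind/byport/80
        chown red5 /etc/authbind/byport/80
        chmod 500 /etc/authbind/byport/80
        authbind --deep /usr/share/red5/red5.sh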

    Read the article

  • Alternate way to connect a VPN through a MiFi

    - by questor
    This has gotten to be a major problem at our company and, depending on who I ask, the problem either does not really exist (mfr. and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work but the majority cannot make the connection. Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way to either make one work or else prove to the providers that their equipment was at fault. They all specifically state “VPN use” on the sales brochures. But few if any work. And those that do are not reliable. From a standpoint of pure knowledge, I just wondered if anyone knew the real reason why they fail? PPTP, L2TP, IPsec: it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly MS Windows protocols. Plenty of Google searches back up my complaints but none seem to be any closer to knowing "why" they fail, just that they do. This is a "quest for knowledge" question. I don't expect a solution. Just a reason for the problem, if anyone has any ideas.

    Read the article

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image I can successfully use NAT for connecting to the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses respond correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly. Within a browser I cannot reach any page via hostname or IP. I feel almost like port 80 is being blocked, but I can't find any reason this would be the case. If anyone has had this occur before, I would love some insight into the problem. I understand this question is a bit out of the norm for Stack Overflow, but I've run out of ideas. Thank you for any help you can provide.
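
    One quick way to narrow this down, offered as a suggested check rather than anything from the original post, is to separate name resolution from raw TCP reachability on port 80 from inside the guest:

        # DNS already works, so resolve a name and then hit the returned address
        # directly (enable the Telnet client first, or use any port tester):
        nslookup www.example.com
        telnet <address returned by nslookup> 80
        # If ping to that address succeeds but the TCP connection times out,
        # something between the guest and the NAT gateway is dropping port 80
        # specifically; if it connects, look higher up (proxy settings, MTU).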

    Read the article

  • PHP output buffer settings ignored by server

    - by Ecom Evolution
    I have been trying to flush the output of certain scripts to the browser on demand, but they do not work on our production server. For instance, I tried running the "Phoca Changing Collation tool" (find it on Google) and I don't see any output until the script finishes executing. I've tried immediately flushing the buffer on other scripts that work fine on any server but this one using the following code: echo "something"; ob_flush(); flush(); Setting "ob_implicit_flush(1);" doesn't help either. The server is Apache 2.2.21 with PHP 5.2.17 running on Linux. You can see our php.ini file here if that will help: http://www.smallfiles.org/download/1123/php.ini.html This isn't the only problem we are having with the server ignoring in-script directives. The server also ignores timeout coding such as: ini_set('max_execution_time', 900*60); AND set_time_limit(86400); Script always times out at the php.ini default. Doesn't seem to matter if script is executed in IE or Firefox. Tried "ini_set('zlib.output_compression_level', 'Off');" and checked that it is "Off" in the php.ini file. The code "apache_setenv('no-gzip', 1);" causes a fatal error so tried uploading a .htaccess file with the "mod_gzip_on No" directive. Neither helps. Tried running Apache as fcgi and suphp, but same results. Server is NOT in safe mode. Pullin ma hair out!
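
    A few checks that often explain this behaviour, offered as a hedged sketch rather than something from the original post: confirm what PHP and Apache actually do at runtime, since ini_set() fails silently for directives locked down elsewhere, and mod_deflate or a gzipping front end will hold output until the script finishes no matter how often flush() is called. The URL below is a placeholder.

        # What the running PHP really has (note: the CLI may read a different
        # php.ini than mod_php; a phpinfo() page shows the Apache values):
        php -i | grep -E 'output_buffering|zlib.output_compression|max_execution_time'
        # From another machine, see whether Apache or a front end is compressing
        # or chunking the response, which forces buffering until the script ends:
        curl -sI http://example.com/slow-script.php | grep -iE 'content-encoding|transfer-encoding'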

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example: 69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-" Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?

    Read the article

  • NAT causes huge external (actually internal) bandwidth usage

    - by user67953
    We have 4 servers running in a data center, with internal IPs 192.168.3.* assigned. A hardware (FortiGate) firewall is configured with NAT, and it directs the traffic as follows: external IP 111.222.333.10 -> 192.168.3.10 www.server1.com, 111.222.333.11 -> 192.168.3.11 www.server2.com, 111.222.333.12 -> 192.168.3.12 www.server3.com. In DNS, we have www.server1.com A 111.222.333.10. Now if I send a lot of data to www.server1.com from www.server2.com, the data will be sent through 111.222.333.10 (the external IP), and this makes our bandwidth usage huge (expensive!). The workaround I have is to add a local hosts mapping to server2: 192.168.3.10 www.server1.com. That way, when sending files from server2 to www.server1.com, the traffic stays internal. However, we are getting more and more servers, and it would be hard to manually add the mapping to every server. Just wondering, do we have another solution for this? Can we do something in the FortiGate firewall? ps. The DNS server being used is public, such as OpenDNS, Google DNS etc.
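
    As a stop-gap that scales a little better than editing each box by hand, the hosts entries can be pushed to every server in one loop; the sketch below assumes root SSH access and reuses the addresses from the question. The longer-term fix is split-horizon DNS, i.e. an internal zone (or DNS view) that resolves www.server1.com to 192.168.3.10 for LAN clients only.

        # Append the internal mappings to every server's /etc/hosts (a sketch):
        for host in 192.168.3.10 192.168.3.11 192.168.3.12; do
          ssh root@$host 'printf "%s\n" "192.168.3.10 www.server1.com" "192.168.3.11 www.server2.com" "192.168.3.12 www.server3.com" >> /etc/hosts'
        done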

    Read the article

  • Strange ASP.NET Queue Performance Counters Behavior?

    - by LemurTech
    We have an ASP.NET 2.0 site running in classic mode. I am seeing very strange behavior in the performance counter values. Perhaps these are bugs (I've been all over Google trying to verify this, without much luck), or perhaps it is just my inexperience with monitoring these things. This PerfMon graph (http://imgur.com/Jv5io5J) represents a load test where I add up to 350 virtual users to the site, at a rate of about 1/sec, performing relatively simple page browsing. At the end of the test, I gradually taper off the number of users. This is a 4 CPU server. Machine.config settings are at the defaults. The solid blue line is ASP.NET Apps v2.x\Requests Executing for the application in question. The profile makes perfect sense, with a quick ramp-up to 32 executing requests (minWorkerThreads x 4 CPUs), followed by a slower ramp-up to 48 ((maxWorkerThreads - minWorkerThreads) x 4 CPUs). The solid yellow line is ASP.NET v2.x\Requests Queued. Again, this makes sense: after the initial 32 request threads are activated, the queue begins to build as new thread initialization can't keep pace with incoming requests. But as executing requests reaches its highest possible value of 48, the counter for ASP.NET Apps v2.x\Requests Queued (green solid line) suddenly springs to life and maintains step with the yellow counter. As far as I can tell, and with no other apps running on the server, these two counters should have had the same values from the start. One other odd thing: the counter for ASP.NET v2.x\Request Wait Time (dotted yellow line) also does not spring to life until executing requests reaches 48. Shouldn't I be seeing values here from the moment ASP.NET v2.x\Requests Queued begins to build? And likewise, why would ASP.NET Apps v2.x\Request Execution Time (dotted blue) increase significantly only after that peak of 48 is reached? Shouldn't it ramp up gradually along with queued requests?

    Read the article

  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted my index.php files from all my domains and put their own index.php files with the following message: Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br] Site desfigurado por Z4i0n Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly 2009 My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone entered through SSH, because some of these subdomains are empty and Google doesn't know about them. But I checked the access log using the 'last' command, and it didn't show any activity through SSH or FTP on the day of the attack nor in the seven days before. Does anybody have an idea? I already changed my passwords. What do you recommend I do? UPDATE My website is hosted at Dreamhost. I suppose they have the latest patches installed. But while I was looking into how they entered my server, I found weird things. In one of my subdomains, there were many scripts to execute commands on the server, upload files, send mass emails and display compromising information. These files had been created since last December!! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file upload form protected by a password system based on sessions. One of the malicious scripts was in the uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.
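
    As a starting point for the cleanup (a generic sketch, not advice from the original post), recently modified PHP files and the usual backdoor patterns can be listed from the shell; on shared hosting it is also worth rotating every password and API key the compromised account could reach, not just the shell login.

        # PHP files touched in the last 90 days, newest last (GNU find):
        find ~ -name '*.php' -mtime -90 -printf '%TY-%Tm-%Td %p\n' | sort
        # Files containing common obfuscated-backdoor patterns:
        grep -RIl --include='*.php' -E 'eval\(base64_decode|gzinflate\(base64_decode' ~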

    Read the article

  • Windows XP - Power surge on hub port

    - by Swift-Tuttle
    For the last few weeks I constantly get this error, as a status bar balloon: Power Surge on Hub Port - A USB device has exceeded the power limits of its hub port. Due to this I am now unable to access any USB devices properly; they get disconnected intermittently. I did quite a few things to resolve this problem, firstly obviously through the Windows help. I even tried all the things described on the Microsoft website (which essentially say to check and update the driver) but in vain. One suggestion I found when I Googled was to disable the USB2 controller through the Device Manager, but since doing that, at every startup the System Configuration utility comes up complaining that it has been changed etc. (On that same site it is mentioned to ignore this message.) But after everything I still can't solve this problem. Any help is much appreciated. The system has Windows XP Service Pack 3 installed and all the updates up to last month. Please let me know if any other hardware info is required. **UPDATE** My laptop is about 5 years old now; it's an HP with a Celeron 1.4G processor. Windows XP SP3 installed. All the latest Windows updates installed. 2 USB ports available. BIOS is HP 68DTD ver F.0A. Do I need to update my BIOS from somewhere, or is this a hardware problem altogether?

    Read the article

  • ownCloud WebDAV interface seems to be broken

    - by Nobleleader13245
    I've been trying to host ownCloud on my server, but every time I try it tells me this: Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides. This is my setup: Windows Server 2012 R2, IIS 8.5, PHP 5.5.11, ownCloud 6.0.3, MySQL 5.6.17. I tried googling the error but I can't seem to find anything useful. Some say I should check whether this works: https://cloud.mcsoftworks.net/remote.php/webdav/ and yes, I can navigate to this folder and I can open files from there. The calendar works and I can also just upload files from https://cloud.mcsoftworks.net/ The only thing that doesn't seem to work is the sync client. The sync client doesn't say anything, it just doesn't connect (Screenshot: http://prntscr.com/3p2apz) This is the error log: Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:56:00+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:47+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:37+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:51:24+00:00 This is my php.ini: http://pastebin.com/es3MB8Uh Does anyone have any idea on how I should get this to work? I've been trying to get this to work for about 14 days now and it's starting to annoy me =P
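
    One detail worth chasing, based only on the log above rather than anything stated in the post: the "Could not resolve host" warnings come from ownCloud curl-ing its own URL, so the Windows server itself has to be able to resolve cloud.mcsoftworks.net. A quick check and a possible workaround, with the address left as a placeholder:

        # On the server, confirm the name resolves at all:
        nslookup cloud.mcsoftworks.net
        # If it does not, a hosts entry pointing the name at the machine's own
        # address (its LAN IP, or 127.0.0.1 depending on the IIS bindings) in
        # C:\Windows\System32\drivers\etc\hosts is a common workaround:
        #   <server LAN IP>   cloud.mcsoftworks.net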

    Read the article

  • Windows Server 2008 R2 DNS - One IP, multiple servers

    - by Blu Dragon
    I need opinions and examples on how best to accomplish the setup I am looking for. I have a public-facing AD domain server with one public IP address. I have set up an external zone for example.com and I successfully have my own name servers pointing to it at ns0.example.com and ns1.example.com. I also have an internal zone for my private network at home.example.com. I am behind a router with the domain server in the DMZ. I want dev.example.com to be accessible from the outside world over https and to point to internal IP address 192.168.1.78. Likewise, I want www.example.com to be accessible from the outside world and point to internal IP address 192.168.1.79. Both the dev and www servers are CentOS 5.6 VMs running inside Hyper-V on the domain server (bad idea, I know, but I am limited on hardware atm). What is the best way to achieve this? From what I have read and researched on Google, I may need to set up a reverse proxy, but I am not sure how well that will work with SSL.
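
    One common shape for this, sketched with placeholder certificate paths and without any claim that it matches the poster's final setup: a single name-based reverse proxy on whichever machine owns the public IP, routing by Host header to the two internal addresses. IIS with Application Request Routing can fill the same role if the proxy has to live on the Windows host.

        # Apache reverse-proxy sketch (needs mod_proxy, mod_proxy_http, mod_ssl):
        <VirtualHost *:80>
            ServerName www.example.com
            ProxyPass        / http://192.168.1.79/
            ProxyPassReverse / http://192.168.1.79/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName dev.example.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/dev.example.com.crt    # placeholder
            SSLCertificateKeyFile /etc/pki/tls/private/dev.example.com.key  # placeholder
            SSLProxyEngine on
            ProxyPass        / https://192.168.1.78/
            ProxyPassReverse / https://192.168.1.78/
        </VirtualHost>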

    Read the article

  • How can I set up apache+mod_proxy so that when I connect to mod_proxy on interface X, it sends the traffic out via interface X?

    - by aspitzer
    We use a service that allots us X number of requests per IP and allows us to set up 5 IPs with such a limit (I know... it seems stupid that they could not just up the limit 5x on one IP). Pretend I have a linux box with the following address on the internet: 66.249.90.104 - that is a Google IP and not mine... so feel free to try to hack into it :) I set up apache+mod_proxy as a forwarding proxy (ProxyRequests On), i.e. you can set up Firefox to use 66.249.90.104:8080 as a proxy, and all Firefox traffic comes out as 66.249.90.104. So far so good. Problem: Now I add more alias interfaces so the total looks like this: eth0: 66.249.90.104, eth0:1 66.249.90.105, eth0:2 66.249.90.106, eth0:3 66.249.90.107, eth0:4 66.249.90.108. I run apache+mod_proxy (a single apache instance) which binds to all interfaces, but no matter which address I connect to to use the forwarding proxy, all traffic goes out to the internet as 66.249.90.104. I have also tried running 5 different apaches, each binding to its own interface only, but that still sends the outbound request through 66.249.90.104. I was hoping to get it to work as follows: I connect to 66.249.90.108 and make a proxy request, and it goes out as 66.249.90.108. I connect to 66.249.90.107 and make a proxy request, and it goes out as 66.249.90.107. etc. Has anyone else had to deal with this issue? The fallback solution would be to just run apache on 5 separate boxes, but I would prefer it all to work on one box. Thanks!
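
    One approach that can work, offered as a sketch under the assumption that each of the five Apache instances can run as its own system user: outbound packets from a forward proxy leave via the primary address unless something rewrites the source, so iptables can SNAT per owning user. (Squid offers the same thing natively with tcp_outgoing_address, if switching proxy software is an option.)

        # Instance bound to 66.249.90.105 runs as user proxy1, and so on; the
        # usernames are placeholders, and the .104 instance needs no rule:
        iptables -t nat -A POSTROUTING -m owner --uid-owner proxy1 -j SNAT --to-source 66.249.90.105
        iptables -t nat -A POSTROUTING -m owner --uid-owner proxy2 -j SNAT --to-source 66.249.90.106
        iptables -t nat -A POSTROUTING -m owner --uid-owner proxy3 -j SNAT --to-source 66.249.90.107
        iptables -t nat -A POSTROUTING -m owner --uid-owner proxy4 -j SNAT --to-source 66.249.90.108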

    Read the article

  • New virtualization project and old SAN

    - by Chris
    Hi, We'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL 380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live motion and other HA features. We have an IBM DS 4300 SAN that we can use for that. But since it has been in production since 2005, I'm not sure about such a critical role for a 5-year-old part. So my question is: What is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement number into Google (for example IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM or naive to trust a 3rd-party reseller? Thanks, Chris

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this more from an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google response for this is this thread, but it merely links to a cryptic slideshow and a general covering of different IO strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given erlang's lightweight processes and underlying aio structure I thought it would compete, but nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought would be keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
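
    For reference (a summary, not drawn from the thread the post links to), the mechanisms usually credited for nginx's static-file speed are visible in a stock nginx.conf: a small fixed pool of event-driven workers, kernel sendfile so file data never passes through userland, and keep-alive connection reuse.

        worker_processes  auto;      # a few long-lived workers, not one process per request
        events {
            use epoll;               # readiness-based event loop, thousands of sockets per worker
            worker_connections 4096;
        }
        http {
            sendfile   on;           # disk -> socket copy stays inside the kernel
            tcp_nopush on;
            keepalive_timeout 65;    # reuse connections instead of paying TCP setup per request
        }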

    Read the article

  • Safari's location bar (auto-suggest and web search)

    - by Lri
    Auto-suggest doesn't seem to work for queries with spaces. Am I missing something? If you select an item from the suggestion list that was matched by its title, the title is filled in before the address. Can you change it to work like in other browsers? SMRT disables searching by title completely. Can you combine Top Hit, History and Bookmarks into a single section? The preferences starting with DebugSafari4 don't work anymore. (Like DebugSafari4IncludeFancyURLCompletionList.) Can you direct unresolved addresses to something like google.com/search?q=?&btnI instead of ?.com? Like by changing keyword.URL in Firefox. Can you remove or hide the web search field? In Camino, Cruz and Fluid it can be resized to zero width. You can't circumvent the normal maximum ratio with InputFieldWidthRatio. AddressBarIncludesGoogle doesn't appear to do anything in the current version. Are there fixes or workarounds for any of these? I'm lumping these issues together because they are closely related: a lot of them were introduced when the location bar was redesigned in Safari 5. I'm also hoping to find something like an extension or a plugin that would replace the standard location bar.

    Read the article

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some unix boxes set up in a way that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. There is no direct connection allowed to 'dev' via ssh from anywhere but 'jumpoff'. Furthermore, only key exchange is allowed on both servers. And you always log in to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a tunnel or something... I have set up a FreeBSD VM and I can get things to work awesomely using unix ssh tools. Basically all I do is make sure my VM's ~/.ssh/id_rsa.pub key is on both jumpoff and dev and use this ~/.ssh/config file:

        # Development Server
        Host ext-dev
        # this must be a resolvable name for "dev" from Jumpoff
        Hostname 1.2.3.4
        User build
        IdentityFile ~/.ssh/id_rsa

        # The Jumpoff Server
        Host ext
        Hostname 1.1.1.1
        User daevid
        Port 22
        IdentityFile ~/.ssh/id_rsa

        # This must come below all of the above
        Host ext-*
        ProxyCommand ssh ext nc $(echo '%h'|cut -d- -f2-) 22

    Then I just simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get this same thing to work using either PuTTY or SecureCRT -- and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but it doesn't seem to be the same concept. I've been messing with trial and error most of the day and nothing has worked (obviously), and I'm at the end of my ssh knowledge and Google searching. I found this link which seemed to be perfect, but it doesn't work for me. The "Master" connects fine, but the "client" portion doesn't connect. It tells me the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight-up example of how to do this in Windows?
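
    A PuTTY equivalent of that ProxyCommand setup, sketched under the assumptions that plink.exe is installed alongside PuTTY and the key is loaded in Pageant (SecureCRT's counterpart is a firewall/jump-host entry referenced from the dev session; the exact naming varies by version):

        # In the PuTTY session for "dev": host 1.2.3.4, port 22, user build.
        # Under Connection > Proxy, set "Proxy type" to "Local" and use this
        # local proxy command, which tunnels a raw TCP stream through jumpoff
        # exactly like the OpenSSH "nc" trick does:
        plink.exe -agent daevid@1.1.1.1 -nc %host:%port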

    Read the article

  • Error when attempting to do a differential or incremental backup of Exchange using ntbackup

    - by voon
    Hi folks, We're running Small Business Server 2003 here. I was reviewing our backup procedures lately and noticed in the ntbackup logs that the differential backups of Exchange were failing with the error: (SERVERNAME)\Microsoft Information Store\First Storage Group is not a valid drive, or you do not have access. A quick search of Google found this MS KB article: http://support.microsoft.com/kb/555613 However, both of the suggested fixes don't appear to apply to our problem. The first solution is to make sure the backup media is formatted and has adequate space. Well, our backup target is a 1 TB external hard drive with about 600 GB of free space. (A full backup of our Exchange DB is currently around 5 GB.) The second suggested fix is to "perform a full backup before trying to do incremental". And again, that can't be it, because we are doing full backups twice a week. There are no errors in the application log, just entries for ntbackup starting and ending. I've also tested doing a differential & incremental backup onto the server's internal drive, which unsurprisingly still did not work. I could get around this problem by always doing a full backup of Exchange, but I kind of like the idea of being space-efficient by doing differential backups. Anyone got any ideas?

    Read the article

  • Is it possible to get the CCM Updates Schedule using PowerShell or VBScript?

    - by frogman
    I want to be able to check the CCM Updates Schedule as seen in the Configuration Manager Updates tab. I've been looking around on Google and I've not been able to find a consistent answer to this. I tried to create a COM object using UDA.CCMUpdatesDeployment. This allows me to successfully set the recurring schedule with the SetUserDefinedSchedule method. If I try to use GetUserDefinedSchedule I only get the original values of the variables.

        PS> $UD = New-Object -com "UDA.CCMUpdatesDeployment"
        PS> $A= 101
        PS> $B= 102
        PS> $UD.GetUserDefinedSchedule([ref]$A, [ref]$B)
        PS> $A
        101
        PS> $B
        102
        PS> $UD.GetUserDefinedSchedule

        MemberType          : Method
        OverloadDefinitions : {void GetUserDefinedSchedule (Variant, Variant)}
        TypeNameOfValue     : System.Management.Automation.PSMethod
        Value               : void GetUserDefinedSchedule (Variant, Variant)
        Name                : GetUserDefinedSchedule
        IsInstance          : True

    I actually want to be able to do this remotely for a list of servers in a text file, but right now any way would do.

    Read the article

  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack-mounted server but am having major difficulties, and I'm wondering if anyone has some insight or items they could recommend I look at to get this resolved. Installation is being attempted via the Cisco remote console (using CIMC's virtual DVD-ROM). Following the first phase of Setup, where the installation files are copied to the target hard drive, a reboot occurs to load Setup from the HDD; mid-way through the "Completing installation" phase the system then reboots unexpectedly. System configuration:

        Cisco UCS C201 M2 (2RU rack mounted server), 16GB RAM, 2x 73GB 15K SAS, 4x 300GB 10k SAS
        Add-on cards - Intel quad-port GigE card (no fibre channel cards)
        Storage - LSI MegaRAID SAS 9261-8i; onboard SATA is disabled (no SATA drives connected)
        KVM - Belkin
        No physical DVD-ROM.. :(

    I have...

        Run memtest86+, no RAM faults
        Disabled/enabled SATA support (BIOS)
        Attempted install from USB DVD-ROM, no effect
        Attempted unattended install scripted via the Cisco Configuration Manager DVD provided
        Removed the Belkin KVM in case that was causing drama
        Discovered that the Cisco website is "awesome" for searching for PDFs/Drivers, cough, reverted back to Google
        Downloaded the latest LSI drivers from LSI's site and used them during the Server 2008 install
        Checked the Windows ISO against checksums from the MS site
        Checked the Windows ISO by using it for an install in a VM

    I'm running out of ways to troubleshoot this, as I am not sure how to enable any sort of 'verbose' mode during the setup process. The next step I have planned is to remove the Intel NIC and try the installation again.. Edit: Problem was the "Cisco INTEL QUAD PT GBE" (1000/PT) .. will have to see if this card is faulty or if it's just drivers.. thanks for the help.

    Read the article

  • Server Bash Line Wrapping Over Text & In Wrong Place

    - by Pez Cuckow
    This is quite a hard problem to explain: when connecting to one of my servers using the bash shell, under any user, the line wrapping is broken and has all sorts of problems, one of which I detail in the screenshots below: Other problems I experience include nano getting very confused about which line and/or letter I am on, as shown by typing the same message into nano: These problems only occur when connecting, as I previously mentioned, to one of my servers, which runs CentOS. Do you know why this is occurring and what I can do to fix it? On other servers the message works fine! Thanks for your time. Output of requested commands: Server that doesn't work properly: Working server: Could it perhaps be the custom prompt on the non-working server? In .bashrc: PS1='\e[1;32m\u@\h\e[m:\e[1;34m\w\e[m$ ' Commenting this out appeared to resolve the problem. Google says line wrapping errors can occur if you don't conform to these rules: use the \[ escape to begin a sequence of non-printing characters, and the \] escape to signal the end of such a sequence. I am not sure where this would fit in on my prompt?
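
    Applying the rule quoted at the end to that prompt, a sketch of the corrected .bashrc line simply wraps every colour escape in \[ ... \] so bash can measure the prompt's real on-screen width:

        # broken (colour escapes counted as printable characters):
        PS1='\e[1;32m\u@\h\e[m:\e[1;34m\w\e[m$ '
        # fixed:
        PS1='\[\e[1;32m\]\u@\h\[\e[m\]:\[\e[1;34m\]\w\[\e[m\]$ '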

    Read the article

  • Rebinding Keys on a Dell Keyboard

    - by Maarx
    I have a Dell Multimedia Keyboard, similar to this one: It has many non-standard keys, like the small circular ones across the top, and the "Multimedia" keys above INSERT/HOME/PAGE_UP. They can be rebound through simple registry entries. Some sample ones are included below:

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey]
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\15]
        "ShellExecute"="C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe http://mail.google.com/mail/#inbox"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\16]
        "Association"=".cda"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\17]
        "ShellExecute"="C:\\Windows\\System32\\SnippingTool.exe"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\18]
        "ShellExecute"="calc.exe"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\7]
        "Association"="http"

    I've rebound the "MAIL" key to boot Firefox directed at my G-Mail account instead of Outlook. I've rebound the button that would normally open "MY COMPUTER" to instead boot the Windows 7 "Snipping Tool", something I find very useful. Now I'm looking to do some other things that I don't already know how to do. Note that answering this question doesn't necessarily require any knowledge about the keyboard or rebinding the keys: I can add, for any given key, a "ShellExecute" entry, and it will simply execute the following command as if it was typed at a Command Prompt. (I'm aware I dumbed that down rather significantly, but bear with me. I'm not really a Windows guy myself.) I use the volume knob for its intended purpose, to change volume. I would like to change a different key, however, to "reset" the Windows volume level back to exactly 50%, or, as it refers to it, "50", on its 0-100 scale. I'm looking for the "program" (what I would type at a command prompt? these are still just Sys32 programs in the PATH, aren't they?) that, I imagine, would take arguments to change sound/volume settings under Windows 7. Perhaps, for clarification, something that might take the form "C: SetVolume -slevel 50" or something.
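
    Windows itself does not ship a console command for setting the master volume, so the usual answer (under the assumption that installing a small third-party tool is acceptable) is NirSoft's nircmd, which one of the ShellExecute registry values above can then point at:

        REM nircmd's volume scale is 0-65535, so 50% is roughly 32768:
        nircmd.exe setsysvolume 32768
        REM wired to a key via the registry, e.g. (the path is a placeholder):
        REM "ShellExecute"="C:\\Tools\\nircmd.exe setsysvolume 32768"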

    Read the article

  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to resort to the experts here at SuperUser, as my other sources (mainly Google ;-)) didn't prove very helpful... So basically, I would like to create a VHD image of a physical disk to be archived/accessed/maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on how to do that on the web, but none that meets exactly the conditions I would like to achieve: I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...) The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like to find a solution that doesn't require booting that Windows XP install (i.e. keep the disk read-only). Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network). However, all the solutions that I've come across either make use of disk2vhd or use the dd command under Linux, which does a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool/program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!
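
    Two tools that can do this from a Linux live environment, offered as a sketch rather than a tested recipe: qemu-img writes VHD directly ("vpc" is its name for the format) and skips blocks that read as all zeros, and VirtualBox's CLI can convert an existing raw dd image. In both cases, wiping the free space to zeros beforehand is what actually keeps the image small, since stale non-zero data in unused blocks is copied faithfully.

        # Image the disk straight into a dynamic VHD (replace /dev/sdX and the target path):
        qemu-img convert -p -O vpc /dev/sdX /mnt/external/disk.vhd
        # Or convert a raw image made earlier with dd:
        VBoxManage convertfromraw disk.img disk.vhd --format VHD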

    Read the article

  • Migrating MySQL 4 to MySQL 5

    - by Lennart Regebro
    This seems to me to be a common use case, so I'm surprised there is so little information about it; sorry if it's a duplicate, but I have searched. :) I'm migrating a client's website from one CMS to another, and moving to newer, faster machines at the same time. As part of this I'm moving a MySQL database from the old server to the new one. The problem is that the old server runs MySQL 4 and the new one MySQL 5. So when I do a mysqldump on the old site and then try to run it on the new site I get syntax errors. ERROR 1064 (42000) at line 178: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'BTREE (`id`), KEY `f_ChangedOnWeb` (`f_ChangedOnWeb`), KEY `f_AddressUpdate`' at line 56 I also tried to use an even older syntax by dumping with --compatible mysql323, but that just resulted in ERROR 1062 (23000) at line 2283: Duplicate entry '??????????' for key 2... It seems to me this must be a reasonably common use case, yet I can't find any sort of help on this. Possibly all my Google searches just drown in irrelevant answers. Most seem to agree that mysqldump is the right answer, but no one mentions that you can get syntax errors...
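
    A hedged sketch of how this kind of 4-to-5 migration is often unstuck, not taken from the post: pin the character set during the dump so the import does not collapse distinct values into '?' (a common cause of that duplicate-entry error), and if the only parse failures are the old-style index-type clauses, stripping them is usually harmless because BTREE is the default index type anyway. The host, user, database name and latin1 guess below are all placeholders.

        # Dump with the character set the 4.x data actually uses:
        mysqldump -h old-server -u root -p --default-character-set=latin1 mydb > mydb.sql
        # Remove the pre-5.x index-type clause if it is what trips ERROR 1064:
        sed -i 's/ TYPE BTREE//g' mydb.sql
        mysql -u root -p mydb < mydb.sql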

    Read the article

  • Mac OS X: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files and I thus want to do it via Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that they are already just fine ("-rwxrwxrwx 1 az az" where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattr, but let's try it anyway), and a Google search returned something about 'chmod -a' or 'chmod -i' or so. All these attempts only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-access flag in Finder solves that.
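
    The Finder "Locked" checkbox maps to a BSD file flag rather than to the permission bits, which is why chmod +w changes nothing; a sketch of the Terminal equivalent follows, with the volume and file paths as placeholders:

        # Show flags with -O; locked files list "uchg":
        ls -lO /Volumes/UNTITLED/somefile
        # Clear the lock on one file, or recursively on a folder:
        chflags nouchg /Volumes/UNTITLED/somefile
        chflags -R nouchg /Volumes/UNTITLED/somefolder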

    Read the article
