Search Results

Search found 19815 results on 793 pages for 'control'.


  • Drive XML returning Windows Volume Shadow Service Error

    - by Ssvarc
    I'm trying to image a SATA laptop hard drive, attached to my computer via a USB adapter, using DriveImage XML. I'm running Win7 Ultimate 64-bit. DriveImage XML returns:

      Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is vss64.exe. The FAQ on the Runtime web page says:

      Please verify in Settings - Control Panel - Administrative Tools - Services that the following services are enabled: MS Software Shadow Copy Provider, Volume Shadow Copy. Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be started and stopped without issue. Using regsvr32 to register "oleaut32.dll" returns successful, but "oleaut.dll" returns: The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error, and Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
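
    A few console checks can narrow down where VSS is failing. This is a minimal sketch from an elevated Command Prompt; the service short names (VSS, SWPRV) are the standard ones on Windows 7, and the oleaut32.dll re-registration is the step from Runtime's FAQ (the oleaut.dll failure is likely just that file not being present on 64-bit Windows 7):

      rem query the two services the FAQ mentions
      sc query VSS
      sc query SWPRV

      rem list VSS writers and providers; errors here usually explain an init failure
      vssadmin list writers
      vssadmin list providers

      rem restart the shadow copy service and re-register the automation DLL
      net stop VSS & net start VSS
      regsvr32 oleaut32.dll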

    Read the article

  • How does SELinux affect the /home directory?

    - by Matt Solnit
    Hi everyone. I'm migrating a CentOS 5.3 system from MySQL to PostgreSQL. The way our machine is set up, the biggest disk partition is mounted on /home. This is out of my control and is managed by the hosting provider. We obviously want the database files to be on /home for this reason. With MySQL, we did the following:

    Edited my.cnf and changed the datadir setting to /home/mysql.
    Added a new "file type" policy record (I hope I'm using the right terminology) to set /home/mysql(/.*)? to mysqld_db_t.
    Ran restorecon -R /home/mysql to assign the labels.

    Everything was good. With PostgreSQL, however, I did the following:

    Edited /etc/init.d/postgresql and changed the PGDATA and PGLOG variables to /home/pgsql/data and /home/pgsql/pgstartup.log, respectively.
    Added a new policy record to set /home/pgsql/pgstartup.log to postgresql_log_t.
    Added a new policy record to set /home/pgsql/data(/.*)? to postgresql_db_t.
    Ran restorecon -R /home/pgsql to assign the labels.

    At this point, I still cannot start PostgreSQL. pgstartup.log says:

      # cat pgstartup.log
      postmaster cannot access the server configuration file "/home/pgsql/data/postgresql.conf": Permission denied

    The weird thing is that I don't see any messages related to this in /var/log/messages or /var/log/secure, but if I turn off SELinux, everything works. I made sure all the permissions are correct (600 for files and 700 for directories), as well as the ownership (postgres:postgres). Can anyone tell me what I am doing wrong? I'm using the Yum repository from commandprompt.com, version 8.3.7.

    EDIT: The reason my question specifically mentions the /home directory is that if I go through all these steps for any other directory, e.g. /var/lib/pgsql2 or /usr/local/pgsql, it works as expected.
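
    If it helps to double-check the labels and catch the actual denial, here is a rough sketch; it assumes the semanage tool (from policycoreutils-python) is installed, and the paths and types are the ones from the steps above:

      # confirm the contexts actually applied, all the way down the path
      ls -Zd /home /home/pgsql /home/pgsql/data /home/pgsql/data/postgresql.conf

      # make the file-context rules permanent, then relabel
      semanage fcontext -a -t postgresql_log_t "/home/pgsql/pgstartup.log"
      semanage fcontext -a -t postgresql_db_t  "/home/pgsql/data(/.*)?"
      restorecon -Rv /home/pgsql

      # the AVC denial may be in audit.log rather than /var/log/messages
      ausearch -m avc -ts today | grep postgres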

    Read the article

  • Automate Windows 7's file sharing and firewall settings

    - by nhinkle
    I am working with my school to customize Windows 7 on some new laptops we are receiving. The laptops come with Windows 7 Professional already installed, and we do not need or want to reimage them. We would, however, like to customize the installation once it is in place, through a series of scripts. We will also be deploying these scripts to computers which have already been set up. Most of the settings we wish to change can be done easily from the command line or with a registry file. However, there is one thing we keep getting hung up on: networking options. Is there any unattended way to set the Windows 7 networking configuration? We would like to set the following automatically, found under Control Panel > Network and Sharing Center > Advanced sharing settings > Home or Work network:

    Turn on network discovery
    Turn on printer and file sharing
    Turn off public folder sharing
    Turn on password protected sharing
    Use user accounts and passwords to connect to other computers

    We also need to configure the firewall to allow the following exceptions:

    File and printer sharing
    Remote assistance
    Remote desktop
    Remote scheduled tasks management
    Remote service management
    Windows remote management

    I've looked around and can't find any way to change these things - I looked into netsh, registry settings, and even used RegMon to watch while I changed the values manually, all to no avail. Google hasn't offered up anything helpful so far. If anyone could provide some insight, I would very much appreciate it. I did find out that much of this is configurable with group policy, but because these computers are in a workgroup, not a domain, I don't know of any way to take advantage of that in an unattended manner.
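
    For what it's worth, the firewall half of this is scriptable with the built-in netsh advfirewall rule groups; the sharing toggles are the harder part. A minimal sketch (run elevated; the group names below are the standard English rule-group names, so localized installs would need the localized strings):

      netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
      netsh advfirewall firewall set rule group="Remote Assistance" new enable=Yes
      netsh advfirewall firewall set rule group="Remote Desktop" new enable=Yes
      netsh advfirewall firewall set rule group="Remote Scheduled Tasks Management" new enable=Yes
      netsh advfirewall firewall set rule group="Remote Service Management" new enable=Yes
      netsh advfirewall firewall set rule group="Windows Remote Management" new enable=Yes

      rem network discovery also has a rule group, which covers the firewall side of that setting
      netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes

    As far as I know, the sharing settings themselves (network discovery, public folder sharing, password protected sharing) have no equivalent documented command line, which matches what the registry monitoring described above found.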

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    Hi, I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error:

      Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed": database engine services, SQL Server replication, full-text search, reporting services. How do I know whether this actually means that anything from my installation is missing? This is a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc. I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix the issue - but I am not touching any queries. Also, now that I have installed the app, when I log in I keep getting this message:

      TITLE: Connect to Server
      Cannot connect to MSSQLSERVER.
      ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
      BUTTONS: OK

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors - I think they are related. Thanks
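
    As a first check after a default-instance install like this, it is worth confirming the service is actually running and connecting locally by server name rather than by service name ("MSSQLSERVER" is the service; "." or "localhost" is what goes in the connect dialog). A minimal sketch from a command prompt on the server:

      rem is the default-instance service installed and running?
      sc query MSSQLSERVER

      rem try a local connection with Windows authentication
      sqlcmd -S . -E -Q "SELECT @@VERSION;"

      rem the same thing explicitly over named pipes, to exercise that protocol
      sqlcmd -S np:\\.\pipe\sql\query -E -Q "SELECT @@SERVERNAME;"

    For the components setup reported as failed, the detail should be in Summary.txt under %ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log.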

    Read the article

  • CoreStorage Encryption Error on Mac Lion

    - by Michael
    I am trying to encrypt an external drive using diskutil CoreStorage on Mac Lion 10.7.4. I thought the only requirements were that the drive have a GUID partition scheme and a Journaled HFS+ file system. I think my drive is configured accordingly, but when I type the following command I get an error message back:

      Michaels-MacBook-Pro:~ Michael$ diskutil cs convert disk2 -passphrase TestPassword
      Error converting disk to CoreStorage: The given file system is not supported on Core Storage (-69756)

    Here are the details reported for the drive in question:

      Michaels-MacBook-Pro:~ Michael$ diskutil list disk2
      /dev/disk2
      #: TYPE NAME SIZE IDENTIFIER
      0: GUID_partition_scheme *500.1 GB disk2
      1: EFI 209.7 MB disk2s1
      2: Apple_HFS Test1 499.8 GB disk2s2

      Michaels-MacBook-Pro:~ Michael$ diskutil info disk2s2
      Device Identifier: disk2s2
      Device Node: /dev/disk2s2
      Part of Whole: disk2
      Device / Media Name: Test1
      Volume Name: Test1
      Escaped with Unicode: Test1
      Mounted: Yes
      Mount Point: /Volumes/Test1
      Escaped with Unicode: /Volumes/Test1
      File System Personality: Journaled HFS+
      Type (Bundle): hfs
      Name (User Visible): Mac OS Extended (Journaled)
      Journal: Journal size 40960 KB at offset 0xe8e000
      Owners: Disabled
      Partition Type: Apple_HFS
      OS Can Be Installed: Yes
      Media Type: Generic
      Protocol: FireWire
      SMART Status: Not Supported
      Volume UUID: 1024D0B8-1C45-3057-B040-AE5C3841DABF
      Total Size: 499.8 GB (499763888128 Bytes) (exactly 976101344 512-Byte-Blocks)
      Volume Free Space: 499.3 GB (499315826688 Bytes) (exactly 975226224 512-Byte-Blocks)
      Device Block Size: 512 Bytes
      Read-Only Media: No
      Read-Only Volume: No
      Ejectable: Yes
      Whole: No
      Internal: No

    I'm a little concerned that the "Partition Type: Apple_HFS" entry is causing the problem, but I don't know how to change that. I only seem to be able to control the "File System Personality: Journaled HFS+" in Disk Utility. Can anyone shed some light on this for me?
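
    One thing worth noting from the transcript above: the convert command targets the whole disk (disk2), while as far as I know diskutil cs convert wants a mountable HFS+ volume. A sketch of what I would try instead (same passphrase as above; back up first, since conversion rewrites the volume):

      # convert the HFS+ volume (disk2s2 / "Test1"), not the whole disk
      diskutil cs convert disk2s2 -passphrase TestPassword
      # or, equivalently, by mount point
      diskutil cs convert /Volumes/Test1 -passphrase TestPassword

      # afterwards, inspect the new logical volume group
      diskutil cs list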

    Read the article

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

      [main]
          logdir = /var/log/puppet
          rundir = /var/run/puppet
          vardir = /var/lib/puppet
          ssldir = $vardir/ssl
          pluginsync = true
          environment = production
          report = true
          certname = master

    When I start the puppetmaster process the ssl directory looks like:

      ssl/private_keys/master.pem
      ssl/crl.pem
      ssl/public_keys/master.pem
      ssl/ca/ca_crl.pem
      ssl/ca/signed/master.pem
      ssl/ca/ca_crt.pem
      ssl/ca/ca_pub.pem
      ssl/ca/ca_key.pem
      ssl/certs/ca.pem
      ssl/certs/master.pem

    I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

      # puppet agent --test
      info: Retrieving plugin
      err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
      err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
      err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run
      err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

      # puppet agent --test --server master
      info: Retrieving plugin
      err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
      info: Caching catalog for master
      info: Applying configuration version '1321805956'
      notice: Finished catalog run in 0.05 seconds

    Which is success of a sort; that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use DNS/hosts to control what 'puppet' resolves to in order to decide which server to use (it plays more easily with availability zones, etc.).
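
    In case it's useful, here is the approach I would sketch for 2.7.6: re-issue the master's certificate with 'puppet' as a DNS alt name, since the existing cert was issued only for 'master'. This assumes your build has the dns_alt_names setting (it replaced certdnsnames around 2.7.6) and that you can afford to wipe and re-issue the master's own cert:

      # stop the master, then revoke and remove its existing cert (the CA itself is kept)
      puppet cert --clean master
      rm -f /var/lib/puppet/ssl/certs/master.pem \
            /var/lib/puppet/ssl/private_keys/master.pem \
            /var/lib/puppet/ssl/public_keys/master.pem

      # puppet.conf, [master] section -- names the new cert should answer to
      #   dns_alt_names = master,puppet

      # start the master in the foreground so it issues itself a new cert with those alt names
      puppet master --no-daemonize --verbose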

    Read the article

  • Dell 15Z: trying to output via Mini DisplayPort to a Yamasaki Q270 (2560x1440) monitor using a dual-link DVI-D to Mini DisplayPort adapter

    - by michaelcku
    I have a Dell 15Z with a GT 525M video card, with the driver updated via NVIDIA. The problem is I just purchased a Yamasaki Catleap Q270 monitor. It is a 2560x1440 monitor with only a dual-link DVI-D input, and the 15Z only has HDMI and a Mini DisplayPort. I got an active dual-link DVI-D to Mini DisplayPort adapter (linked below).

    http://www.monoprice.com/products/product.asp?c_id=104&cp_id=10428&cs_id=1042802&p_id=6904&seq=1&format=2

    The setup works when I plug it into my MacBook Air 13.3". However, when I plug it into my 15Z it doesn't work. I've tried everything I can think of: Windows+P, Fn+F1. It just doesn't work. I believe the issue lies in the Mini DisplayPort, but I can't be sure; no matter what I do, the Mini DisplayPort doesn't get recognized. I spent 3 hours on the phone with the XPS tech team. They simply said it was not compatible... Any help or suggestion would be greatly appreciated.

    Read the article

  • Can't start Bind9 on Ubuntu 10.04 + Plesk 10.1 - "named: no process found"

    - by bradley.ayers
    I've installed a fresh version of Ubuntu 10.04 64-bit; I didn't install BIND when choosing packages in the Ubuntu installer. I downloaded the auto-installer for Plesk 10.1 and installed it successfully. When I logged into the Plesk control panel and tried to change the password, it failed because it couldn't restart BIND. I SSH'd into the box, tried sudo /etc/init.d/bind9 restart, and get the following:

      brad@ws01:/root# sudo /etc/init.d/bind9 restart
      * Stopping domain name service... bind9
      WARNING: key file (/etc/bind/rndc.key) exists, but using default configuration file (/etc/bind/rndc.conf)
      rndc: connect failed: 127.0.0.1#953: connection refused
      named: no process found
      [ OK ]
      * Starting domain name service... bind9 [fail]

    Looking at tail /var/log/messages reveals a whole bunch of:

      Feb 23 16:08:21 ws01 kernel: [ 3840.065851] type=1503 audit(1298441301.831:31): operation="open" pid=5565 parent=5563 profile="/usr/sbin/named" requested_mask="::r" denied_mask="::r" fsuid=108 ouid=0 name="/var/named/run-root/etc/named.conf"

    Edit: After following ooshro's advice, BIND runs; however, I still get the named: no process found error:

      brad@ws01:/etc/apparmor.d$ sudo /etc/init.d/bind9 restart
      * Stopping domain name service... bind9
      WARNING: key file (/etc/bind/rndc.key) exists, but using default configuration file (/etc/bind/rndc.conf)
      named: no process found
      [ OK ]
      * Starting domain name service... bind9 [ OK ]
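
    That audit line is an AppArmor denial: the denied path suggests Plesk runs named chrooted under /var/named/run-root, which the stock usr.sbin.named profile doesn't cover. A sketch of the usual workaround, assuming the profile on 10.04 includes /etc/apparmor.d/local/usr.sbin.named (otherwise edit the main profile directly):

      # allow named to read and write under its chroot
      echo '/var/named/run-root/** rw,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.named

      # reload the profile and restart bind
      sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
      sudo /etc/init.d/bind9 restart

      # check whether named is actually running now
      pgrep -l named

    The "named: no process found" line during the stop phase just means the init script tried to kill a process that wasn't running, so once the start phase reports OK it can be ignored.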

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error:

      DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99) Server , database Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?').

    This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control over this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records?

    Additional information: the query works fine from tsql. I only get this error through Perl's DBD::Sybase interface. (Should I test through something else? I don't have the expertise yet to install PHP or Python. I've got jTDS and can use it, but I think that's a completely different implementation, not an interface to FreeTDS.) Adding client charset = UTF-8 to my freetds.conf file results in "Out of memory!" printed to STDERR.
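
    One commonly reported cause of the "Out of memory!" symptom is the charset conversion being attempted over a TDS protocol version that is too old to carry Unicode, so it is worth pinning the protocol version alongside the charset. A hedged sketch of a freetds.conf server entry (the entry name, host and port are placeholders):

      [mssql]                          # placeholder entry name
          host = sqlserver.example.com # placeholder host
          port = 1433
          tds version = 8.0            # TDS 7.x/8.0 carries Unicode; 4.2 does not
          client charset = UTF-8

    DBD::Sybase would then connect with server=mssql to pick up this entry, and running tsql -C shows which default TDS version and iconv support were compiled in.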

    Read the article

  • How to restore infrared support in Windows XP?

    - by Emil Rasmussen
    My infrared port is not working on my Windows XP SP3 ThinkPad X60. I can see it in Device Manager, and its status says it is working properly. The problem manifests in several ways. Originally I was trying to transfer data from my Polar RS800CX watch, and the Polar ProTrainer 5 software gave me this error message: "Communication port couldn't be opened". I then tried to connect the watch to another computer, and the infrared connection was established immediately. Next I tried to connect the other computer to my ThinkPad X60. The other computer again immediately showed the Wireless Link icon "'computername' is in range" in the notification area. But nothing is shown on the ThinkPad, and a file transfer to the ThinkPad fails with an error message reporting that "the target machine actively refused it [the connection]".

    This brings me to conclude that the infrared support in Windows on the ThinkPad is somehow broken. This is further supported by the fact that I can't find the "Wireless Link" icon in the Control Panel, and when I try to run irprops.cpl nothing happens. So the question is, how do I reinstall the infrared support? Some of the solutions I have explored:

    I have tried to disable/enable the infrared in the BIOS, to trick a re-installation.
    The IR module settings on the other computer, which can establish a successful IR connection, are an exact match for the ThinkPad that can't.
    There was a problem in Windows XP SP2 with the Wireless Link icon - MS KB article - but that hotfix can't be applied on an SP3 installation. I also guess that the hotfix is included in SP3.

    Any other suggestions?
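
    One thing worth checking before reinstalling anything is the Infrared Monitor service, which (as I recall, so treat the short name Irmon as an assumption) is what the Wireless Link icon and incoming IR transfers depend on in XP:

      rem is the Infrared Monitor service present and running?
      sc query Irmon

      rem set it to start automatically and start it now
      sc config Irmon start= auto
      net start Irmon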

    Read the article

  • Remote Desktop Connection to the same screen as on the monitor?

    - by Ricket
    I'm running Windows Home Server on, well, my server. It's in my entertainment center, hooked to my TV, and I use it to listen to music and watch movies. Right now I have a keyboard and mouse stuck beside my TV so that I'm able to load a movie. It would be nice, though, to be able to remotely control the screen. Remote Desktop Connection seems to open its own session in the background, separate from the session shown on the monitor. This doesn't do any good because things started via remote desktop must be closed or changed via remote desktop, and I can't start a movie with remote desktop and then see it from the screen. Is there a way to get Remote Desktop Connection to connect to the visible screen? I am currently using UltraVNC; it was doing the job, but it has its quirks. For instance, the problem just now that prompted me to ask this question; upon trying to connect, UltraVNC informs me that "Server closed connection - The server running as application". This is just one of several problems I've had with it, and I want something that is as reliable and low-maintenance as the built-in Remote Desktop Connection. (and free) If the answer to the above is no, I'm welcome to recommendations for a different remote desktop system to try.
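
    For what it's worth, Windows Home Server is built on Server 2003, and on that generation the built-in client can attach to the physical console session instead of creating a new one. A minimal sketch ("homeserver" is a placeholder for the machine name):

      rem older RDC clients: connect to the console (physical) session
      mstsc /v:homeserver /console

      rem RDC 6.1 and later renamed the switch
      mstsc /v:homeserver /admin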

    Read the article

  • Administrator user unable to log in, suspicious user accounts "sky$", "admin$"

    - by mks
    I have a Windows 2008 R2 Standard (64-bit) server running in a virtual machine. Since yesterday I am suddenly not able to log in as administrator. Nobody changed the password. I am unable to log in both at the console and using remote desktop. Whenever I log in as Administrator I get this error: "The user name or password is incorrect". Nothing has changed on the machine, and I have logged in successfully in the past, both through the console and via remote desktop, several times on the same machine.

    One strange behaviour I noticed: I see some additional user accounts if I try to log in as another user. The suspicious accounts are:

    sky$
    admin$
    SUPPORT_388945a0

    Were they created by some malware/virus? Or are they hidden Windows accounts? The Microsoft site says that SUPPORT_388945a0 is: "The Support_388945a0 account enables Help and Support Service interoperability with signed scripts. This account is primarily used to control access to signed scripts that are accessible from within Help and Support Services. Administrators can use this account to delegate the ability for an ordinary user, who does not have administrative access over a computer, to run signed scripts from links embedded within Help and Support Services. These scripts can be programmed to use the Support_388945a0 account credentials instead of the user's credentials to perform specific administrative operations on the local computer that otherwise would not be supported by the ordinary user's account. When the delegated user clicks on a link in Help and Support Services, the script executes under the security context of the Support_388945a0 account. This account has limited access to the computer and is disabled by default."

    However, I am not sure where "admin$" and "sky$" came from. Has anyone had a similar experience?
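
    If any session on the box is still reachable (safe mode, another admin account), the built-in commands below would show when those accounts were created and let you disable them while investigating; the account names are taken from the list above:

      rem list local accounts and the details of the suspicious ones
      net user
      net user sky$
      net user admin$

      rem disable them while investigating (or /delete to remove outright)
      net user sky$ /active:no
      net user admin$ /active:no

      rem check who ended up in the local Administrators group
      net localgroup Administrators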

    Read the article

  • How do I grant a database role execute permissions on a schema? What am I doing wrong?

    - by Lewray
    I am using SQL Server 2008 Express edition. I have created a login, user, role and schema. I have mapped the user to the login and assigned the role to the user. The schema contains a number of tables and stored procedures. I would like the role to have execute permissions on the entire schema. I have tried granting execute permission through Management Studio and by entering the command in a query window:

      GRANT EXEC ON SCHEMA::schema_name TO role_name

    But when I connect to the database using SQL Management Studio (as the login I have created), firstly I cannot see the stored procedures, but more importantly I get a permission denied error when attempting to run them. The stored procedure in question does nothing except select data from a table within the same schema. I have tried creating the stored procedure with and without the line:

      WITH EXECUTE AS OWNER

    This doesn't make any difference. I suspect that I have made an error when creating my schema, or there is an ownership issue somewhere, but I am really struggling to get something working. The only way I have successfully managed to execute the stored procedures is by granting CONTROL permissions to the role as well as EXECUTE, but I don't believe this is the correct, secure way to proceed. Any suggestions/comments would be really appreciated. Thanks.
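
    A sketch of the grants I would expect to be sufficient for this scenario, using the schema_name and role_name placeholders from the statement above (VIEW DEFINITION is only needed so the procedures show up in Object Explorer; the impersonation block at the end is just a test and uses placeholder names):

      -- let the role run procedures and read tables in the schema
      GRANT EXECUTE ON SCHEMA::schema_name TO role_name;
      GRANT SELECT  ON SCHEMA::schema_name TO role_name;

      -- optional: lets members of the role see the objects in Management Studio
      GRANT VIEW DEFINITION ON SCHEMA::schema_name TO role_name;

      -- sanity check: impersonate the user and try the call
      EXECUTE AS USER = 'user_name';
      EXEC schema_name.procedure_name;
      REVERT;

    If the schema and the objects it contains are owned by different principals, ownership chaining will not apply, which would also explain why EXECUTE alone was not enough; ALTER AUTHORIZATION ON SCHEMA::schema_name TO dbo is one way to line the ownership up.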

    Read the article

  • cpanel dns only / rdns questions

    - by Clear.Cache
    I started getting IPs from ARIN directly, instead of from the data center I'm colocated at. Now I have to apply rDNS myself for my clients upon request, instead of having the NOC at the DC do it. That is expected, since I am in full control of the IP delegation and therefore have nameserver authority. The question is: how do I "create" PTR/rDNS records for my clients?

    My current server uses cPanel/WHM with ns1/ns2.mycompany.com. I also listed those as the DNS nameservers in the ARIN IP block's whois record. How do I create rDNS for my clients? Should I install cPanel DNS Only on an entirely separate server and use that method instead? (http://layer1.cpanel.net/) If so, how can I seamlessly transition the DNS records over to that new DNS server, retaining ns1/ns2.mycompany.com and their ns1 and ns2 IP addresses? Even more important: I have to change the ns1/ns2 IPs to the new ones I received from ARIN. How can this be done while avoiding downtime during the DNS transition?

    On a side note, would it be easier to just install cPanel DNS Only on a dedicated server and use dns1.mycompany.com and dns2.mycompany.com with their own dedicated ns1/ns2 IPs from ARIN, and utilize this DNS server for customers who request rDNS? Would this be a more viable solution than using our current ns1/ns2.mycompany.com nameservers? Is cPanel DNS Only standalone software that does not require cPanel/WHM on another server? Is it possible to have redundant DNS servers set up using this software alone, ns1 on one server and ns2 on another? Thanks.
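
    Mechanically, reverse DNS is just another zone: ARIN delegates the in-addr.arpa zone for the block to your nameservers, and you publish PTR records in it. A hedged sketch of what such a zone looks like in the BIND that cPanel/WHM (or cPanel DNS Only) manages, using the documentation range 203.0.113.0/24 and placeholder hostnames:

      ; zone "113.0.203.in-addr.arpa" served by ns1/ns2.mycompany.com
      $TTL 86400
      @   IN  SOA  ns1.mycompany.com. hostmaster.mycompany.com. (
                    2024010101 ; serial
                    3600       ; refresh
                    900        ; retry
                    1209600    ; expire
                    86400 )    ; minimum
          IN  NS   ns1.mycompany.com.
          IN  NS   ns2.mycompany.com.

      10  IN  PTR  mail.clientdomain.com.   ; 203.0.113.10
      11  IN  PTR  web.clientdomain.com.    ; 203.0.113.11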

    Read the article

  • Not recognizing second monitor after hibernate (Windows 7, Dell D630 laptop)

    - by Brooks Moses
    I have a Dell Latitude D630 laptop which I've recently updated to Windows 7 64-bit. (The Dell site confirms that it's Windows-7-compatible.) Normally it lives in a docking station with a second monitor connected to the DVI port on the docking station, and I use the second monitor in a multi-monitor configuration with the laptop screen. Sometimes I undock the laptop and use it separately. Here's the problem: If I hibernate the laptop while undocked, and then power it back up in the docking station, it does not recognize the second monitor. By which I mean that not only does it not share the desktop onto the second monitor, but if I go into the control panel for display settings and press "Detect", it does not even detect the existence of the second monitor. I can tell it to "use the VGA port anyway" for a second monitor, but the monitor is connected to a DVI port on the docking station, so that doesn't do anything useful. If I entirely reboot the laptop while it's connected to the docking station, it has no problem recognizing the second monitor and using it. But then, if I hibernate, undock, de-hibernate while undocked and rehibernate, and then re-dock and de-hibernate, it's back to not recognizing the second monitor again. I'm reasonably certain that this is not a limitation of the hardware; this worked fine on Windows XP. I'm currently using the Windows 7 driver for my video card. I attempted to use the video driver from the Dell website for this laptop, but Dell only provides Vista 64-bit drivers, not Windows 7 64-bit drivers. Their "Windows 7 compatibility" page suggests that the Vista drivers should work, but when I attempted to install the driver, it gave me a "this operating system not supported" error and refused to install. Any further ideas?

    Read the article

  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have 2 domain controllers, each multi-homed to straddle 2 internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller therefore has 2 DNS entries; both entries have identical host names, but correspond to subnet A and subnet B respectively (example entries shown):

    dc1 host 192.168.8.1
    dc1 host 192.168.9.1
    dc2 host 192.168.8.2
    dc2 host 192.168.9.2

    We also have a 3rd subnet for our DMZ (subnet C), on which neither domain controller has an IP address, but our firewall/routing tables provide access to subnet A from subnet C and vice versa, while not allowing access to subnet B from subnet C.

    Here's my issue: how can I force/determine which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swaps out the name for the IP address, and that's that. The problem is that if it randomly selects the entry that corresponds to the 9.x subnet B (no access from subnet C), the server fails to resolve; if it picks the entry for the 8.x subnet A, it resolves (firewall/routing tables are defined for communication between these two subnets).

    Here's what I'd like to know:

    What are best practices (if any) for dealing with DNS resolution on subnets that the DNS servers don't have a presence on?
    Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name that correspond to different IP subnets?
    Should I even have 2 DNS HOST entries for the same name?

    Here's what I'd like to avoid:

    Making edits to the HOSTS files of servers on subnet C to force DNS resolution of the hostname to the appropriate subnet
    Adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C

    Again, my apologies if this was too verbose/unclear. Thanks!

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, and Analytics do it

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my javascript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a javascript snippet to add to your page. Therefore, my javascript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

    What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js. Remember, once websites start embedding this file, its location CANNOT change under any circumstances.

    Right now we can afford just one dedicated server. So I have minified my file, enabled gzip, and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?

    Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking the service? What are the pros/cons of moving just this file to a CDN? Also, since it's just one javascript file (50 KB), is there any affordable CDN we could consider from the beginning?

    Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (Both in terms of the server and javascript/ajax limitations.) Thanks in advance.
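
    Since the plan above already mentions gzip and cache-control through Apache, here is a hedged sketch of the relevant vhost fragment for a single static file; it assumes mod_deflate and mod_headers are enabled and uses the /js/foo.js path from the question:

      <IfModule mod_deflate.c>
          AddOutputFilterByType DEFLATE application/javascript text/javascript
      </IfModule>

      <Location "/js/foo.js">
          # let browsers and proxies cache the widget for a day
          <IfModule mod_headers.c>
              Header set Cache-Control "public, max-age=86400"
          </IfModule>
      </Location>

    Keeping the max-age modest matters more than squeezing bytes: it is the only lever left for pushing out widget fixes once thousands of pages embed the URL.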

    Read the article

  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network. I don't need to manage them while they are outside the network. I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert. I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out: it lists the active task 'new device (sending)' but is unable to complete it. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I take the server and iPad outside of the school network. Shortly after device enrollment, the system log on the Lion server reports the following (the actual IP address is replaced with INTERNALIP):

      Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
      Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
      Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn\u2019t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
      Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.

    Read the article

  • Windows Server 2008 SMTP & POP3 Configuration

    - by Alex Hope O'Connor
    This is the first time I have ever configured a VPS server without third-party applications such as the Plesk control panel. I have got most functionality working on all my websites, except that I am very unsure how to set up the email functionality on this new server. Basically I want standard POP3 functionality: a bunch of accounts with private mailboxes, all able to send and receive email using their individual usernames and passwords. My server setup is pretty simple: it's a VPS with IIS and DNS Server running. To set up SMTP and POP3, I have tried adding the SMTP Server feature through the Server Manager console (I'm very unsure of the configuration, as the guides I found did not explain it), and I then installed a third-party application called Visdeno SMTP Extender, as it claims to be a POP3 service providing accounting and the ability to communicate with email clients. That is as far as I have gotten, as I can not seem to find much information on the subject. So can someone please tell me how to go about configuring these services in order to provide standard SMTP and POP3 functionality? Thanks, Alex.
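
    For the SMTP half, the feature install itself can be scripted; a minimal sketch for Server 2008 (on R2, PowerShell's Add-WindowsFeature SMTP-Server does the same thing). The POP3 side genuinely needs a third-party add-on, since Windows Server 2008 no longer ships a POP3 service:

      rem install the SMTP Server feature
      servermanagercmd -install SMTP-Server

      rem the service itself is SMTPSVC, managed afterwards through the IIS 6.0 Manager console
      sc query SMTPSVC
      net start SMTPSVC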

    Read the article

  • Broken UAC: can't edit files/folders or change settings in user accounts

    - by Antoros
    It appears that UAC is broken: I can't move or delete some files/folders. When asked to use administrative rights (and I say yes), it shows the loading bar and then nothing gets moved or deleted, with no error whatsoever. I opened Control Panel > User Accounts, and when I click on any option with the shield (administrative rights), the mouse changes to loading and then goes back to normal, without opening any menu or showing any error.

    What I have already tried:

    Ran sfc /scannow - no errors found.
    Used Microsoft's Fix It - the Recycle Bin was reported broken and repaired, but the error remains.
    Used the Microsoft Accounts troubleshooter; these are the errors it reported:
    "Problem with Microsoft account policy" - this is the problem it found but didn't fix.
    "Trust this PC" - a loop of redirections; I can't get to trust this PC (the only one with Windows 8).
    "Problems with system registration" - I think this is because of the soft system reset.
    "Some settings have sync turned off" - I never configured anything to sync.
    "Root causes found and created logs" - I would like to know where the logs are saved...

    I had to use a ".reg" file to change the UAC setting to "never notify", thinking it would fix this. It didn't; it stopped asking, but I still can't access the UAC settings. I can still open a cmd window with administrative rights. I enabled the built-in Administrator account (net user administrator /active:yes), and even with that account I couldn't change any settings.

    So there it is. I don't know what else to do at the moment. (This PC broke with the 8.1 update and was restored to factory configuration; that broke several drivers and kept most of the registry entries. I can't find the cause of this problem.)

    Other info: I tried to delete a file in a program folder and couldn't, so I downloaded Unlocker to check first whether it was permissions. It wasn't: it showed me a message telling me that there wasn't any error and asking whether I would like to delete the file; I clicked yes, and it did delete it. What puzzles me is that I can't do this without the tool, not even using the feature that takes over ownership.

    Edit: wow, on a non-crappy PC chkdsk is fast - completed with no errors found :/
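
    To see whether the UAC policy values themselves were mangled by the factory restore, they can be checked and reset from the admin cmd window that still works. A sketch - these are the standard UAC policy values, and the data written back are the Windows 8 defaults:

      rem inspect the current UAC policy values
      reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA
      reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin

      rem restore the defaults (1 = UAC on, 5 = prompt for consent for non-Windows binaries)
      reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 1 /f
      reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 5 /f

      rem a reboot is required for EnableLUA changes to take effect
      shutdown /r /t 0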

    Read the article

  • Mystery undeletable file

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf:

    it's not read-only, system, or hidden
    it's not in use by another process (according to Process Explorer)
    the NT security permissions aren't the problem either - I am the owner and have Full Control; as a double-check, the Effective Permissions tab shows that I have permission to delete

    Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can, however, rename it or move it to another folder on the same drive. I can also read it, and VirusTotal says it's clean, which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL, I think). The relevant line from Process Monitor is:

      6:52:14.3726983 PM 112 Explorer.EXE SetDispositionInformationFile C:\Config.Msi\75ce84f.rbf CANNOT DELETE Delete: True Write 1232

    Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit (there seems to be no UI to do it otherwise?). So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient, I declined the offer and ran pendmoves to find out which files the installer had scheduled to move or delete. It wanted to delete two files with the .rbf extension (rollback files) located in C:\Config.msi\ (this applies to both, even though I've been speaking about one). So I tried to delete them manually and couldn't. Does anyone have any ideas what could be preventing deletion? (And I don't think it's malware, even though I'm not running AV at the moment.)
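
    Since pendmoves is already in the picture, its companion Sysinternals tool can queue the delete through the same pending-rename mechanism the installer uses, which sidesteps whatever is blocking the live delete; a sketch:

      rem show what is already queued (as in the question)
      pendmoves

      rem schedule the rollback file for deletion at the next reboot (empty destination = delete)
      movefile C:\Config.Msi\75ce84f.rbf ""

      rem verify the entry was added, then reboot when convenient
      pendmoves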

    Read the article

  • Start Menu Shortcuts resisting reorganisation

    - by seusr
    Running Win7. The Start Menu was getting too big, so I navigated to C:\ProgramData\Microsoft\Windows\Start Menu\Programs and created some sub-folders for various categories (image editors, system utilities, media players, security, for example) in which to group the shortcuts. Having created the group folders, I dragged and dropped the Start Menu shortcuts into them. However, it seems not all Start Menu shortcuts are alike:

    Some programs, once their shortcut folders have been moved from the Start Menu, spawn new (but empty) folders bearing their company names at the top level of the Start Menu, e.g. AVICodec.
    Some programs don't have Start Menu shortcut folders that can be seen in Windows Explorer (or equivalent) at all, even though they turn up under the Start Menu itself. It seems these can be moved where they are visible, i.e. in Start > All Programs, by dragging and dropping - not just onto the desired folder, but by hovering to expand the desired folder and then dropping between other shortcuts there. E.g. Daum, Format Factory.

    That's not such a problem, but why it's happening is a mystery. What's going on, and how do I make these shortcuts behave normally or otherwise control them properly?
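
    The likely explanation for the shortcuts that are invisible in Explorer is that Windows 7 merges two Start Menu folders - the all-users one being edited above and a per-user one - so a shortcut that exists only per-user never shows up under ProgramData. A quick way to compare the two sides (standard paths):

      rem all-users Start Menu (the one being reorganised above)
      dir /b "C:\ProgramData\Microsoft\Windows\Start Menu\Programs"

      rem per-user Start Menu, merged into the same All Programs view
      dir /b "%AppData%\Microsoft\Windows\Start Menu\Programs"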

    Read the article

  • E-mail hosting provider that can set up aliases with wildcards

    - by Richard Downer
    I am looking for an e-mail hosting provider that allows e-mail aliases containing wildcards. In more detail: I own my own domain, and I want an e-mail hosting provider to manage e-mail for it. Now, to help deal with spam, I often give different e-mail addresses to different organisations. These e-mail addresses always start with the same prefix, but then differ. So, for example, I might give out these e-mail addresses:

    [email protected]
    [email protected]
    [email protected]

    I want to be able to go to the e-mail provider's control panel and set up e-mail aliases like this:

    [email protected] - bounce/discard (because this address has been sold to spammers)
    joe-*@sample.com - redirect to [email protected]

    What I don't want to do is set up every single e-mail address individually (because I make them up whenever I need them), nor do I want a general catch-all for any unrecognised address in my domain (because I don't want to be carpet-bombed with spam when a spammer runs a dictionary attack against my domain name). Although this seems like a useful feature to have, it appears to be little known, and I've not seen anybody advertise it. My current hosting provider offers this, but I want to move away from them, so I need another provider that will continue to work with all the e-mail addresses I've been using for years.

    Alternatively, I could use mail server software that runs on Windows - I have seen some commercial packages offering this feature, but they cost more than I can afford. Are there any suggestions for low-cost software packages?
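
    Not a hosted provider, but for reference this is what the wildcard-with-exception pattern looks like in Postfix's regexp virtual alias maps - it is the feature to look for (or ask a provider about) under whatever name their panel uses. A sketch; the burned address and the destination mailbox are placeholders, and "devnull" is assumed to be a local alias pointing at /dev/null:

      # /etc/postfix/virtual_regexp -- the first matching pattern wins, so the burned
      # address is discarded before the wildcard forwards everything else
      /^joe-spammedcompany@sample\.com$/    devnull
      /^joe-.*@sample\.com$/                real-mailbox@sample.com

      # main.cf
      #   virtual_alias_maps = regexp:/etc/postfix/virtual_regexp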

    Read the article

  • Basic IP address structure

    - by dannymcc
    We currently have a few servers, around 30-40 workstations, and 16 phones. Each device has a static IP address. As an example, the standard settings for a new workstation are:

    IP: 192.168.1.XXX
    Subnet: 255.255.255.0
    Gateway: 192.168.1.99
    DNS: 192.168.1.50

    As I am slowly exploring new server OSes and virtualisation etc., I am getting close to wanting a wider range of IP addresses. What I would like to do is separate the devices by IP as follows:

    Servers: 192.168.1.XXX
    Workstations: 192.168.2.XXX
    Printers: 192.168.3.XXX
    Phones: 192.168.4.XXX
    VMs: 192.168.5.XXX

    Is this a bad idea, or is this a common way of doing things? My biggest concern is the phones and subnet masks. The phones are managed by our provider, although I have access to the server that runs them. Would I need to change the subnet mask to 255.255.0.0 on all devices, or only on those that change? For example, the phones don't need to connect to any other devices other than other phones and the phone server. So could I keep the phones on 192.168.1.XXX with a subnet mask of 255.255.255.0, and move everything I have complete ownership/control of to 192.168.X.XXX with a new subnet mask of 255.255.0.0? Would that work?

    Read the article

  • PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded

    - by Tiny
    I'm trying to upgrade PHP from 5.4.3 to 5.4.14 in WampServer 2.2e. What I did:

    Downloaded php-5.4.14-Win32-VC9-x86 (thread safe) and extracted it under C:\wamp\bin\php.
    Copied wampserver.conf from C:\wamp\bin\php\php5.4.3 to C:\wamp\bin\php\php5.4.14.
    Renamed php.ini-development to phpForApache.ini.
    The port number of the WAMP server had already been changed in httpd.conf from the default 80 to 8087. (This is mentioned here, though that is about upgrading from PHP 5.3.5 to 5.4.0.)

    After this, restarting the WAMP server and its services was done all over again, and both versions appear in the PHP versions menu (opened when the tray icon of the server is clicked). But when I attempt to enable a library like php_mysql or php_mysqli, a warning message box appears:

      PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded.

    I have also tried removing the semicolon before them in the php.ini file, to no avail. I'm running Microsoft Windows XP Professional Version 2002, Service Pack 3. Where might the problem be?

    EDIT: I have changed extension_dir from C:\php to c:\wamp\bin\php\php5.4.14\ext\ in php.ini, as the answer below indicates, and the library now loads correctly. But now it says:

      1045 - Access denied for user 'root'@'localhost' (using password: YES)

    even though the user name and the password are the same as they are in MySQL in the config.inc.php file under phpMyAdmin. I have also tried restarting the MySQL56 service from Control Panel > Services (Local), but it keeps giving the same error. Does someone know why this happens?
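
    For reference, this is roughly what the relevant lines end up looking like once the new PHP folder is the active one (paths taken from the question; the exact extension list depends on what is ticked in the Wamp menu):

      ; C:\wamp\bin\php\php5.4.14\php.ini (and phpForApache.ini)
      extension_dir = "c:/wamp/bin/php/php5.4.14/ext/"
      extension = php_mysql.dll
      extension = php_mysqli.dll

    The 1045 error afterwards is MySQL rejecting the credentials rather than a PHP loading problem: the password in phpMyAdmin's config.inc.php has to match the root password of the MySQL instance (MySQL56) that WampServer is actually pointing at.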

    Read the article
