Search Results

  • Why are symbolic links not working in MySQL?

    - by Eno
    I'm having an issue; I searched a lot, but I'm not sure whether it's related to a previous security patch. On the latest version of MySQL for Debian Lenny (5.0.51a-24) I need to share one table between two databases that live under the same path (/var/lib/mysql/db1 and db2). I created symbolic links in db2 pointing to the table files in db1, but when I query that table from db2 I get:

      ERROR 1030 (HY000): Got error 140 from storage engine

    This is how it looks:

      test-lan:/var/lib/mysql/test3# ls -alh
      drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
      drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
      lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
      lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
      lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
      -rw-rw---- 1 mysql mysql   65 2010-08-30 13:24 db.opt

    I really need those symlinks. Is there a way to make them work like before? (The old MySQL server is fine.) Thanks,
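
    One thing worth checking, as a sketch rather than a confirmed fix: MySQL only follows table symlinks when symbolic-link support is enabled, and hardened builds can ship with it off. The have_symlink variable shows the current state, and my.cnf controls it (stock Debian paths assumed):

      # check whether the server was started with symlink support
      mysql -e "SHOW VARIABLES LIKE 'have_symlink';"

      # /etc/mysql/my.cnf -- make sure this is NOT set in [mysqld]
      # skip-symbolic-links

    If the patched package refuses symlinks regardless, a possible workaround (assuming both databases sit on the same filesystem) is hard links instead of symlinks, since MyISAM only needs the directory entries to resolve:

      cd /var/lib/mysql/test3 && rm blbl.frm blbl.MYD blbl.MYI
      ln /var/lib/mysql/test/blbl.* /var/lib/mysql/test3/

    Note that DROP TABLE from one database would then leave the other database's link pointing at freed data, so treat this as a stopgap.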

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-posted on StackOverflow.com as well, because the question relates to development. Feel free to let me know where it best belongs.] Hi all, I'll try to bullet-point to keep it short.

    Background / Issue:

    - Trying to install ASP.NET MVC 3 RC on my Windows 7 machine.
    - Uninstalled other versions of MVC (2 and 3 Beta 1).
    - Ran the installer -- got a generic error, 2203. The log files said it was a permissions error on C:\Windows\Installer.
    - Checked C:\Windows\Installer -- sure enough, it's marked as read-only. I un-checked "Read-Only" in the folder properties and applied; it appears to open the dialog and apply to all files, but when I click Properties again, the read-only box is back to checked.
    - Checked the Security tab of the folder -- both SYSTEM and the Administrators group have full access.
    - Checked ownership -- the Administrators group is listed as an owner.
    - Verified that I'm in the system as an Administrator (in fact, the only account in the Administrators group besides Administrator).

    So, what gives? Thanks in advance for any help you can provide!
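
    A note and a sketch (assumptions flagged): on Windows the Read-only checkbox on a folder is largely cosmetic and often re-checks itself, so it is probably a red herring; error 2203 is usually about the SYSTEM account's effective access to C:\Windows\Installer. From an elevated command prompt, something like the following forces ownership and permissions -- standard takeown/icacls usage, run at your own risk on a system folder:

      takeown /f C:\Windows\Installer /r /d y
      icacls C:\Windows\Installer /grant SYSTEM:(OI)(CI)F /t
      icacls C:\Windows\Installer /grant Administrators:(OI)(CI)F /t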

  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing trouble with DCOM configuration. Here is the case: I'm using a product which supports client-server interaction through DCOM, but the client gets no access to the server if the attempt is made from an account whose name also exists on the server but has a different password. Basically, if we try to access the server from the Administrator account, which is obviously present on the server machine, we will fail if the client's Administrator password doesn't match the server's. After actively collaborating with the product's developer in attempts to localize the issue, he came back with a "can't be fixed" resolution -- or, if you prefer to call a pikestaff a pikestaff, more likely a "don't know how to fix" resolution :). I believe there is a solution for this problem, and I'm asking you, IT professionals, to help me out with this one. I do realize the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO. But since I've bumped into the same behavior while working with file/printer sharing -- Windows tried to simplify everything and used the currently impersonated credentials to access the share -- I hope the solution lies at the system-configuration layer. P.S. I believe the actual software product I'm talking about is entirely irrelevant; however, my experience tells me there will always be somebody who thinks it is, on the contrary, very relevant. Here it is: SpRecord.

  • How do I make a Data Validation drop-down exclude blanks?

    - by Iszi
    Related: How can I use non-adjacent cells on another sheet for a Data Validation drop-down, and only show non-blank values? For now, I've worked around the above problem by re-arranging my sheet so all the Data Validation source cells are in one range. I'm leaving the above question open, though, because I think it still poses an interesting problem. However, the issue now is that the Data Validation drop-down isn't working the way I expected it to (and the way I believe others are telling me it should). Even though I've got everything into one named range, Excel still shows blanks in a drop-down that references that range.

    Setup (Sheet1):

      A1 = (blank)   B1 = Header
      A2 = 1         B2 = Value1
      A3 = 2         B3 = Value2
      A4 = 3         B4 = Value3
      A5 = 4         B5 = (empty)
      A6 = 5         B6 = (empty)
      A7 = 6         B7 = (empty)

    Sheet1!B2:B7 is named Validation. Sheet2!A1 is set to use Data Validation with Source =Validation and in-cell drop-down. The drop-down in Sheet2!A1 shows:

      Value1
      Value2
      Value3
      .
      .
      .

    (The dots represent blank lines.) How can I get rid of these blank lines in the in-cell drop-down, while still including Sheet1!B5:B7 in the Data Validation source? Note: I nuked the sheet and tried it again without column A from Sheet1 (putting the values from column B in the above example into column A), and it worked fine. Adding column A back, though, brought the blanks back into the Data Validation drop-down. What do I need to do to keep column A as I want it and keep the in-cell drop-down clean?
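
    A common workaround, as a sketch (it assumes the non-blank values always sit contiguously at the top of the column): define the named range with a formula instead of fixed cells, so it grows and shrinks with the data. In Name Manager, set Validation to refer to:

      =OFFSET(Sheet1!$B$2, 0, 0, COUNTA(Sheet1!$B$2:$B$7), 1)

    COUNTA counts the non-blank cells in B2:B7, and OFFSET returns a range of exactly that height, so the drop-down only ever sees the filled cells.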

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems running IPMI on my servers that have network bonding enabled.

    Platform: CentOS release 5.3 (Final)
    Kernel: 2.6.18-92.el5 64-bit
    Hardware: Dell PowerEdge 1950
    Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

    I have bonded eth0 and eth1 as active/passive, with eth0 as the active interface. Below is the conf description from /proc:

      Bonding Mode: fault-tolerance (active-backup)
      Primary Slave: eth0
      Currently Active Slave: eth0
      MII Status: up
      MII Polling Interval (ms): 30
      Up Delay (ms): 0
      Down Delay (ms): 0

      Slave Interface: eth0
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:22:19:56:b9:cd

      Slave Interface: eth1
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:22:19:56:b9:cf

    My IPMI device is as follows:

      IPMI Device Information
      Interface Type: KCS (Keyboard Control Style)
      Specification Version: 2.0
      I2C Slave Address: 0x10
      NV Storage Device: Not Present
      Base Address: 0x0000000000000CA8 (I/O)
      Register Spacing: 32-bit Boundaries

    I have used both OpenIPMI and FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out. Below is the full run of the command with debug info:

      ipmi_lan_send_cmd:opened=[0], open=[4482848]
      IPMI LAN host 70.87.28.115 port 623
      Sending IPMI/RMCP presence ping packet
      ipmi_lan_send_cmd:opened=[1], open=[4482848]
      No response from remote controller
      Get Auth Capabilities command failed
      ipmi_lan_send_cmd:opened=[1], open=[4482848]
      No response from remote controller
      Get Auth Capabilities command failed
      Error: Unable to establish LAN session
      Failed to open LAN interface
      Unable to get Chassis Power Status

    On the other hand, I configured IPMI on a box with the same specs as mentioned above but without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone could help circumvent this issue. Muhammed Sameer
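
    A hedged guess at the cause, with a sketch: on the PowerEdge 1950 the BMC typically shares a physical port with one of the onboard NICs and answers on that NIC's own MAC address. In active-backup mode the bonding driver by default rewrites all slaves' MACs to a single address, which can leave the BMC filtering on a MAC that no longer matches what is on the wire. If the bonding driver is new enough to support it, the fail_over_mac option keeps each slave's own MAC:

      # /etc/modprobe.conf -- assumed bond0 setup; option availability depends on driver version
      options bonding mode=active-backup miimon=100 fail_over_mac=1

    The stock 2.6.18 kernel may not support fail_over_mac, in which case a dedicated (non-shared) DRAC/BMC port, if available, sidesteps the problem entirely.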

  • Our application responds differently on the client's server

    - by WtFudgE
    I have an application running on our local server and it works perfectly. However, when I deploy the application to our client's server there are problems. If I copy all the sources from their server back to another local machine, it works fine again... It is an ASP.NET application talking to Windows Server 2008. The client's server specs are:

      Intel Xeon 2.13 GHz
      6 GB RAM
      300 GB HDD
      Windows Server 2008 R2

    Although I think it's not that important, I will describe the problems I am talking about to give you a better idea: they are related to updating and deleting fields in the SQL Server. Whenever more than one user is running the application and updates or deletes (not even the same record!), some fields seem to update or delete wrongly. But as I said before, if we copy the source to another server this is not the case, so I assume it is more configuration-related. Do you guys have any idea what the issue could be? Thanks a lot.

    EDIT: My client told me he disabled the SQL-related accounts. Could this be related?
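
    A quick, hedged way to follow up on that edit: list which SQL Server logins are actually disabled, since a disabled or remapped login can make the application silently fall back to a shared account, and shared sessions could explain cross-user update weirdness. Standard SQL Server 2008 catalog-view usage, run on the client's instance:

      SELECT name, type_desc, is_disabled
      FROM sys.server_principals
      WHERE type IN ('S', 'U', 'G')
      ORDER BY is_disabled DESC, name;

    Comparing this output against the working server's is a cheap first diff before digging into application code.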

  • Cannot connect to local network shares when connected to VPN. Error: "the user name could not be found"

    - by Nick G
    I keep finding that on our small company LAN (7 users, 3 servers) some servers keep becoming "not accessible" for the purposes of file sharing. They display the message "\\SERVER is not accessible. You might not have permission to use this network resource. The user name could not be found". But I don't know why "the user name could not be found", as all the machines are on the same domain and the PDC and BDC seem to be behaving OK.

    EDIT: VPN seems to be the cause. It turns out I can see the server if I use the IP address (\\1.2.3.4\ etc.) or the fully qualified Active Directory name (e.g. \\server.domainname.local), but not if I use the server name on its own, or a mapped network drive originally created from the "short" name. Oddly, my machine has no trouble resolving the server's DNS name, as I can ping the machine name OK and it immediately comes back with the IP; however, nslookup fails. It seems to be a problem with how Windows looks up machine names when connected to VPNs: when I'm connected to a VPN, Windows seems to use the DNS associated with the VPN and not the one on the domain controller. This behaviour seems incorrect to me, as surely it would mean connecting to any VPN would break the ability to look up local machine names for servers, printers, etc.

    So I guess the real question now is: how can I make my machine still search the local Active Directory DNS (the PDC) even when connected to a VPN? More info in my comments below.
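
    One hedged workaround, as a sketch (the suffix below is taken from the question): make short names expand to the FQDN form that already works by adding the AD domain to the DNS suffix search list, so "\\server" is looked up as "server.domainname.local". On Windows 7 this can be set per machine via the registry, or under Advanced TCP/IP Settings in the adapter properties:

      reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v SearchList /t REG_SZ /d domainname.local

    If the VPN's DNS servers can't answer for the internal zone at all, the heavier alternative is to stop the VPN adapter from supplying DNS (or lower its priority) so queries keep going to the PDC.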

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack on Ubuntu 12.04, for various reasons:

    - The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience with MAAS and Juju, and would rather stick to technologies I am more familiar with so that I can debug problems that arise.
    - I have tried StackGeek, but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for nova storage.
    - I have tried DevStack, but cannot log into the dashboard. The login form appears fine, but as soon as I try to submit the page, my browser begins loading indefinitely.
    - I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue.

    Each of these attempts was with a fresh Ubuntu 12.04 LTS setup. I'm finding it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
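
    For the DevStack route, a minimal localrc sketch for a single-NIC 12.04 box (all values here are placeholders, not recommendations): drop it into the devstack checkout before running ./stack.sh, so every service gets a known password and the dashboard login is predictable:

      # devstack/localrc -- hypothetical single-node values
      HOST_IP=192.168.1.50          # this machine's LAN address
      ADMIN_PASSWORD=openstack
      MYSQL_PASSWORD=openstack
      RABBIT_PASSWORD=openstack
      SERVICE_PASSWORD=openstack
      SERVICE_TOKEN=tokentoken

    An indefinitely-loading dashboard after submit is often a sign the form is posting to an address the browser can't reach (e.g. an interface address picked by default), which is what pinning HOST_IP is meant to rule out; then log in as "admin" with ADMIN_PASSWORD.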

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all. We've got a few apps that rely on Windows authentication: a couple of web apps with AD auth turned on, and we usually connect to our SQL servers with Windows auth. This normally runs without a hitch. It doesn't work so well if we're VPN'd to a client site, though.

    SSMS: Opening SSMS normally from the start menu, then picking a server that normally accepts Windows auth, results in a message saying:

      Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider)

    If I drop to a command prompt and use runas /user:domain\user to launch SSMS, I can successfully Windows-auth to our SQL Server instances with that ssms process. If I look in Task Manager, both copies of ssms.exe (start menu vs runas) have the same user, and I can see no discernible differences between the processes in procexp.

    AD auth websites: If I open IE and browse to any of our websites that require an authenticated Windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way, though.

    Outlook: Even Outlook prompts for a username when we are VPN'd!

    It's affecting our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP, for what it's worth. The VPN connections are just the built-in Windows VPN connections; they're not fancy Cisco VPNs or anything of that nature. Does anyone know how to tell Windows that I'd like to be my normal old primary-domain user, rather than the VPN user, when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" when trying to access resources requiring Windows auth over the client's VPN. Thanks! Apologies if this is more a superuser question; I wasn't sure which site it best suited. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a serverfault Q.
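
    Since runas already works, one hedged refinement is runas /netonly, which keeps your local token for everything on the machine but presents the named credentials only for network authentication, so the VPN identity never gets used against your own domain. Standard runas usage; the SSMS path below is a guess at a default install and will vary by version:

      runas /netonly /user:MYDOMAIN\myuser "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"

    The same trick should apply to outlook.exe or iexplore.exe.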

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with time zones within MySQL. Long story short: my application is worldwide, and each database has its own time zone set within the application (not the server), in the form "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously this is unacceptable for MySQL on a per-connection basis. I read that it handles data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger, but UNIX_TIMESTAMP() follows the global time zone for the server, which obviously bothers me a lot :| So I went looking for a per-connection solution to use inside the trigger, and found that mysql_tzinfo_to_sql can import zone info (UTC offsets) from my Linux zoneinfo files. Although, to my amusement, when I ran the command I got the following:

      bash: mysql_tzinfo_to_sql: command not found

    So I'm looking for a solution to fix that. I don't want to map the time zone names to UTC offsets just so I can use them in the trigger. Is there an alternative tool? Or at least sources for this one in particular? What kind of queries does this tool generate, so that I could do it manually if there is no alternative? Thanks in advance for any help on the issue!

    P.S. The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf.
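
    A sketch of the usual route (standard MySQL tooling, though the packaging detail is from memory): the tool normally ships with the server package, and its whole job is to emit INSERT statements for the mysql.time_zone_* tables from /usr/share/zoneinfo. If it is present anywhere, this loads the zones, after which CONVERT_TZ(..., 'UTC', 'Europe/Berlin') and named zones work per call:

      # find which package provides it, if any
      dpkg -S mysql_tzinfo_to_sql

      # generate and load the time zone tables
      mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

    If the binary genuinely isn't in the Debian package, piping the output of the same command from any machine that has it (same MySQL major version) into the mysql database achieves the same thing, since the result is plain SQL.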

  • Using the GitHub OAuth Plugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin

    As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also allow the person logging in to only view the projects they have GitHub permission on. For instance: three projects on GitHub (A, B, C) and three builds on Jenkins. User 1 has Git access to all 3 projects (A, B, C). User 2 has Git access to only 1 project (A). When logging into Jenkins, User 1 can see all 3 projects (this works), and User 2 should only see project A. The problem is User 2 can also see all 3 projects, when they should only see 1! Have I got this correct, and if so, is this a bug?

    I have the settings set in the Jenkins configuration under Github Authorization Settings. There we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open-sourced here: https://github.com/mocleiri/github-oauth-plugin

    I was trying to get Jenkins to print me the logs from the plugin, but I also failed at viewing those (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging

    It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29

    Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

  • New 3TB HDD, can see full 2.7TB in Linux and Windows, but shows up as 801.6GB in BIOS

    - by Ben Lee
    I recently purchased a Seagate Barracuda 3TB drive (ST3000DM001). After installing it, my BIOS recognized it but reported the size as 801.6 GB. I went ahead and booted into Linux anyway (Ubuntu 11.10 64-bit), and Linux saw it as 2.7 TB. Following some online instructions (don't have the link handy, unfortunately), it looked like converting the drive to GPT was recommended. So I used GParted to do that, then formatted it to NTFS, also using GParted. (I'm using NTFS because my machine is dual-boot and I want to have access to the drive in Windows too.) I rebooted into Windows (Windows 7 64-bit), and Windows also sees the drive with 2.7 TB free. Everything seems to be working fine. The only issue is that my BIOS is still reporting the drive as 801.6 GB. My motherboard is an ASRock 770 Extreme3 and the BIOS is the latest version. Since everything seems to be working with the new drive anyway, I'm hoping that the BIOS reporting the wrong size is not an actual problem. But honestly, I don't really know. Anyone out there more familiar with this know if it could potentially cause any problems in the future? Any way to get the BIOS to report the correct size?
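
    The reported number itself suggests what is happening (a plausible explanation, not a vendor-confirmed one): a legacy BIOS that counts sectors with a 32-bit LBA field wraps around at 2^32 sectors, and the remainder is exactly what this BIOS shows:

      2^32 sectors x 512 bytes           =  2,199,023,255,552 bytes  (~2199.0 GB)
      3,000,592,982,016 bytes (3 TB drive) - 2,199,023,255,552 bytes =~ 801.6 GB

    Since the OS talks to the drive through its own driver rather than the BIOS size field once booted, and the disk is GPT, this is normally cosmetic; the main case where it would bite is BIOS-level booting from that drive.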

  • MySQL table does not exist

    - by Phanindra
    I am getting the following error in the .err file:

      110803  6:51:26  InnoDB: Error: table `ims`.`temp_discoveryjobdetails` already exists in InnoDB internal
      InnoDB: data dictionary. Have you deleted the .frm file
      InnoDB: and not used DROP TABLE? Have you used DROP DATABASE
      InnoDB: for InnoDB tables in MySQL version <= 3.23.43?
      InnoDB: See the Restrictions section of the InnoDB manual.
      InnoDB: You can drop the orphaned table inside InnoDB by
      InnoDB: creating an InnoDB table with the same name in another
      InnoDB: database and copying the .frm file to the current database.
      InnoDB: Then MySQL thinks the table exists, and DROP TABLE will
      InnoDB: succeed.
      InnoDB: You can look for further help from
      InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html

    And when I do exactly that -- copy the .frm file from another database and drop the table -- I get the following:

      InnoDB: Error: trying to load index PRIMARY for table ims/temp_discoveryjobdetails
      InnoDB: but the index tree has been freed!
      110803  6:50:26  InnoDB: Error: table `ims`.`temp_discoveryjobdetails` does not exist in the InnoDB internal
      InnoDB: data dictionary though MySQL is trying to drop it.
      InnoDB: Have you copied the .frm file of the table to the
      InnoDB: MySQL database directory from another database?
      InnoDB: You can look for further help from
      InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html

    Please can anyone help me out of this? Also, can anyone tell me why this error occurs? EDIT: The issue occurs only when the disk is full and we use TRUNCATE TABLE. Also, it occurs only in version 5.1, not in 5.0.

  • Citrix Access Gateway with Citrix Receiver

    - by vm370
    I'm currently using a Citrix Access Gateway with firmware 5.0.4 to provide access over an SSL VPN to an isolated environment, which is connected to via the Citrix Access Gateway client that the device delivers by default. However, we have encountered various problems with this client: for example, it's somehow not possible to remove it from autostart (not registered as a service or in the autostart list?!), and it killed the Cisco VPN client, which is used in the company and unfortunately cannot be replaced. The Cisco client only works again after a procedure of cleaning the registry of all CAG client remains, which requires a lot of effort. Because of that, I'd like to check whether there is an alternative to this client, since this is a real pain... Unfortunately I couldn't find a way to use the Receiver with the CAG yet, but if you have any resources on how to build this workaround, I'd be very happy. Thanks a lot in advance.

    UPDATE: If there are other alternatives, I'd be even happier, since using the Receiver would also mean an issue with the ICA client, which is also used in our environments. From my experience, the Receiver and the ICA client are no good friends either...

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04, I receive a failure message that the service was unable to start. Any insight into the issue would be greatly appreciated! Below are the contents of startup_log and startup_err.

    startup_log:

      {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
      {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]}
      {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
      {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
      {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
      {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    startup_err:

      Crash dump was written to: erl_crash.dump
      Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
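
    A hedged reading of the first record: the {badmatch,{error,etimedout}} inside inet_tcp_dist,listen means the Erlang VM timed out setting up its distribution listener, which is classically a hostname-resolution problem rather than a RabbitMQ one. Worth checking that the machine's hostname maps to a reachable address (names below are placeholders):

      hostname                    # e.g. prints "myhost"
      getent hosts myhost         # should return an address without delay

    If the lookup hangs or fails, adding a line such as "127.0.1.1 myhost" to /etc/hosts (the Debian/Ubuntu convention) and restarting rabbitmq-server is the usual fix sketch.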

  • Dell 15Z: trying to output to a Yamasaki Q270 (2560x1440) monitor via a dual-link DVI-D to Mini DisplayPort adapter

    - by michaelcku
    I have a Dell 15Z with a GT 525M video card and an updated driver from NVIDIA. The problem is I just purchased a Yamasaki Catleap Q270 monitor. It is a 2560x1440 monitor with only a dual-link DVI-D input, while the 15Z only has HDMI and a Mini DisplayPort. So I got an active dual-link DVI-D to Mini DisplayPort adapter (linked below):

      http://www.monoprice.com/products/product.asp?c_id=104&cp_id=10428&cs_id=1042802&p_id=6904&seq=1&format=2

    The setup works when I plug it into my MacBook Air 13.3. However, when I plug it into my 15Z it doesn't work. I've tried everything I can think of: Win+P, Fn+F1. It just doesn't work. I believe the issue lies with the Mini DisplayPort, but I can't be sure; no matter what I do, the Mini DisplayPort doesn't get recognized. I spent 3 hours on the phone with the XPS tech team. They simply said it was not compatible... Any help or suggestions would be greatly appreciated.

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

      root@myserver:~# apt-get install trac
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or if you are using the unstable
      distribution that some required packages have not yet been created
      or been moved out of Incoming.
      Since you only requested a single operation it is extremely likely that
      the package is simply not installable and a bug report against that
      package should be filed.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
        trac: Depends: python-setuptools (> 0.5) but it is not installable
              Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
              Depends: python-subversion but it is not installable
              Depends: libjs-jquery but it is not installable
              Recommends: python-pygments (>= 0.6) but it is not installable or
                          enscript but it is not installable
              Recommends: python-tz but it is not installable
      E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions, but that produced a very similar output. I have the main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben
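
    A hedged first check (standard apt usage; nothing here is specific to Trac): "not installable" for packages that definitely exist in universe usually means the universe lines never made it into this box's sources, or apt-get update hasn't run against them:

      grep -R universe /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
      apt-get update
      apt-cache policy python-setuptools libjs-jquery

    If apt-cache policy shows no candidate at all, note that once Jaunty reaches end of life its repositories move off the main mirrors to old-releases.ubuntu.com, so the sources.list entries have to point there before update will see universe again.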

  • Embedded Spotlight does not function in Outlook 2011

    - by syntaxcollector
    I have a rather strange problem. I manage a network of about 35 Macs, and we all recently switched from Mail.app to Outlook 2011. (Please don't debate this; I've already had this conversation ad nauseam.) We are using network home directories (NHD), served from a Windows file server over the SMB protocol. The problem I'm having is that Spotlight does not function inside of Outlook, but ONLY inside of Outlook: the global Spotlight can find all email and contacts within Outlook, but the embedded Spotlight cannot. As a test, I took one of my users and switched them from a network home directory to a portable home directory (PHD), meaning the home folder was copied to the local hard drive. This resulted in a working Spotlight within Outlook; as soon as I switched the user back to an NHD, however, it stopped working. I have already tried erasing the Spotlight index and killing the process to force re-indexing. I have exhausted all Spotlight troubleshooting, and since the global search is working, that is obviously not the issue. I believe it has something to do with the Spotlight plugin Microsoft wrote that is located in /Library/Spotlight. Any ideas?
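
    One hedged check: embedded Spotlight queries run against the index of the volume the data lives on, and OS X does not index network (SMB) volumes by default, which would fit the NHD-breaks/PHD-works pattern exactly. mdutil can report and toggle indexing status per volume (the volume path below is a placeholder):

      mdutil -s /Volumes/homes     # report indexing status for the network volume
      mdutil -i on /Volumes/homes  # attempt to enable indexing on it

    Even with indexing forced on, Spotlight support over SMB is spotty, so treat this as a diagnostic sketch more than a guaranteed fix.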

  • Java plugin in the browser doesn't work even though it is enabled

    - by Pratyush Nalam
    I installed the Java Development Kit (64-bit) recently and saw it includes the JRE plugin for 64-bit as well. But since Firefox is 32-bit, I also installed the 32-bit version of the JRE. Both show up in Programs and Features. Now, the problem is, the other day I opened a site which required the Java plugin. The frame showed the regular Java loading animation and hung; nothing happened after that. I checked Firefox's plugins section, and it shows Java is enabled, so no issue there. I tried other browsers, IE10 and Chrome, but to no avail; it doesn't work anywhere. I saw another question which said that you have to install 64-bit and then 32-bit. That's what I actually did: first installed JDK 7 64-bit (which includes JRE 7 64-bit) and then installed JRE 7 32-bit. I even tried the Java website's "Do I have Java?" section, and there too it just keeps spinning for ages (I have waited well over 10-20 seconds). How do I go about fixing this? This never happened to me on Windows 7. I am on Windows 8 Pro.

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that the QueueConnectionFactory class indeed only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's?

    The steps I took, starting from a default JBoss 4.2.3.GA installation, were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database, and a -service.xml file in the deploy directory with the queue definition, like the one below:

      <server>
        <mbean code="org.jboss.mq.server.jmx.Queue"
               name="jboss.mq.destination:service=Queue,name=myfirstqueue">
          <depends optional-attribute-name="DestinationManager">
            jboss.mq:service=DestinationManager
          </depends>
        </mbean>
      </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?) Thanks
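
    On the last question: yes, lookups in a clustered JBoss 4.x normally go through HA-JNDI, which listens on port 1100 and proxies to whichever node actually has the binding, rather than the per-node JNDI on 1099. A hedged client-side sketch (host names are placeholders):

      java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
      java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
      java.naming.provider.url=jnp://node1:1100,node2:1100

    With those properties in jndi.properties (or the InitialContext environment), a lookup of QueueConnectionFactory should resolve even from nodes that aren't hosting the HA singleton, since HA-JNDI consults the whole cluster.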

  • SCCM 2012 - some remote clients unable to download some applications, 401.2 error

    - by growse
    I've got a small SCCM 2012 deployment with about 35 clients attached. Most of these clients are on the same network as the single SCCM host, but three are about 1000 miles away. Oddly, these three clients have stopped being able to download some application packages over BITS. Publishing a new package works for all the other clients, but for these three it never seems to download; if I go to the Software Center, it just hangs at "0% downloaded".

    On the client, DataTransfer.log says (repeatedly):

      CDTSJob::HandleErrors: DTS Job '{2DCBBB4C-6D84-479A-9218-885B72C834B9}' BITS Job '{E78147DD-4A26-4942-B4FD-6EC3EB77EECD}' under user 'S-1-5-18' OldErrorCount 442 NewErrorCount 443 ErrorCode 0x80072EE2  DataTransferService  30/07/2012 09:27:41  2964 (0x0B94)
      CDTSJob::HandleErrors: DTS Job ID='{2DCBBB4C-6D84-479A-9218-885B72C834B9}' URL='http://sccm-host:80/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1' ProtType=1  DataTransferService  30/07/2012 09:27:41  2964 (0x0B94)

    Cas.log says (repeatedly):

      Location update from CTM for content Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 and request {AD041FCB-03D2-4FE6-A6FA-38A6B80FB2A1}  ContentAccess  30/07/2012 08:33:39  5048 (0x13B8)
      Download location found 0 - http://lonsbrndsccm02.mcs.int.thomsonreuters.com/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1  ContentAccess  30/07/2012 08:33:39  5048 (0x13B8)
      Download request only, ignoring location update  ContentAccess  30/07/2012 08:33:39  5048 (0x13B8)

    On the server, I've enabled failed-request log tracing. The raw IIS log says the following:

      2012-07-30 08:28:42 10.13.111.35 GET /SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1/sccm/NSCP-0.4.0.172-x64.msi 80 - 10.2.27.19 Microsoft+BITS/7.5 401 2 5 293

    That is a 401.2 error, meaning access denied. The failed-request log is large, but the punchline is that it chucks out an "Unauthorized: Access is denied due to invalid credentials." message.

    All clients are members of the same domain and appear to be (otherwise) working great. I've re-installed the SCCM client, and deleted and re-added the computer to SCCM. Some other packages work fine; the daily anti-malware delta gets downloaded and patched without issue. Why are these packages failing?
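
    A hedged place to look, given BITS plus 401.2: the SMS_DP_SMSPKG$ virtual directory is normally reachable anonymously (or via Windows auth as the machine account), and a distribution point where anonymous authentication got switched off in IIS, or where "Allow clients to connect anonymously" is unchecked on the DP, can produce exactly this per-package flakiness. The effective IIS setting can be read on the server with appcmd (standard IIS 7.5 tooling):

      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/SMS_DP_SMSPKG$" -section:system.webServer/security/authentication/anonymousAuthentication

    Comparing that output (and the DP's anonymous-connection checkbox in the SCCM console) against a working distribution point would be my first diff; the 0x80072EE2 in DataTransfer.log is a timeout, consistent with BITS endlessly retrying the 401.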

  • Website does not resolve in browser but traceroute is successful

    - by Colum
    I am trying to figure out an issue. My internet is working fine, but this one website is not resolving. It works via a proxy, and traceroute works:

       1  192.168.1.1 (192.168.1.1)  4.205 ms  0.568 ms  0.510 ms
       2  * * *
       3  67.59.255.13 (67.59.255.13)  10.583 ms  7.949 ms  7.557 ms
       4  67.59.255.61 (67.59.255.61)  10.256 ms  9.576 ms  13.083 ms
       5  64.15.8.126 (64.15.8.126)  9.943 ms  11.929 ms  11.452 ms
       6  64.15.0.217 (64.15.0.217)  14.655 ms  14.092 ms  13.771 ms
       7  64.15.0.118 (64.15.0.118)  33.201 ms  34.875 ms  36.544 ms
       8  xe-6-0-3.ar1.ord1.us.nlayer.net (69.31.111.169)  34.027 ms  34.957 ms  34.231 ms
       9  ae1-30g.cr1.ord1.us.nlayer.net (69.31.111.133)  82.683 ms  35.138 ms  37.592 ms
      10  xe-3-0-0.cr2.iad1.us.nlayer.net (69.22.142.26)  41.657 ms  34.063 ms  34.519 ms
      11  ae2-30g.ar2.iad1.us.nlayer.net (69.31.31.186)  35.780 ms  36.361 ms  33.968 ms
      12  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  35.086 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.234)  38.031 ms  as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230)  36.833 ms
      13  cr1.iad2.inforelay.net (66.231.176.246)  32.595 ms  cr2.iad1.inforelay.net (66.231.176.10)  31.771 ms  cr1.iad2.inforelay.net (66.231.176.246)  32.622 ms
      14  cr1.iad2.inforelay.net (66.231.176.246)  32.956 ms  33.625 ms !X  41.058 ms
      15  * cr1.iad2.inforelay.net (66.231.176.246)  35.312 ms !X  *
      16  * cr1.iad2.inforelay.net (66.231.176.246)  32.814 ms !X  *
      17  cr1.iad2.inforelay.net (66.231.176.246)  35.459 ms !X  *  53.137 ms !X

    Ping returns this:

      Request timeout for icmp_seq 0
      Request timeout for icmp_seq 1
      Request timeout for icmp_seq 2
      Request timeout for icmp_seq 3
      Request timeout for icmp_seq 4
      Request timeout for icmp_seq 5
      Request timeout for icmp_seq 6

    But what I cannot figure out is why my browsers (Firefox, Safari, Opera) cannot resolve the domain. I am on a Wifi connection. What could be the problem? BTW, I am on a Mac (10.6.5).
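
    Two hedged observations: the !X marks in hops 14-17 mean an ICMP "communication administratively prohibited" response, so the far end is filtering rather than unreachable, and "not resolving" in the browser while traceroute works points at DNS rather than routing. On 10.6 the browser goes through the system resolver, which may be serving a cached or different answer than the command-line tools got, so it's worth querying directly (the domain below is a placeholder, since the question doesn't name it):

      dig thesite.example.com            # via the configured resolver
      dig thesite.example.com @8.8.8.8   # via an outside resolver, for comparison
      dscacheutil -q host -a name thesite.example.com
      dscacheutil -flushcache            # clear the OS X 10.6 resolver cache

    If the @8.8.8.8 query answers but the plain one doesn't, the configured DNS server (often the router at 192.168.1.1) is the likely culprit.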

  • PBS batch jobs - the qalter command

    - by Ryan Budney
    I've got a giant computation running on a Scientific Linux cluster. At present I have over 600 jobs parked in the queue, waiting for processor time, while a few are running. I'm trying to use the qalter command on some of the idle but scheduled jobs. I'd like to schedule them for a later time, so that other users can jump part of the queue, sort of as an act of politeness. Is this doable? For example, JOBNAME 292399 is currently idle, scheduled to be run whenever a spot in the queue opens up. But if I run qalter -a 10051000 292398 followed by qrerun 292398 I get qrerun: Request invalid for state of job 292398.euler. From the qalter documentation, I thought 10051000 refers to tomorrow (oct 5th, 10am) but perhaps I'm misunderstanding something? If I'm going about this the wrong way, please let me know. The main thing I'm looking for is a command that's easily scriptable, so that I can modify when my queued tasks get run. qalter seems good for those purposes if I can get it working. I'd rather avoid running qdel and re qsubbing the computations, as there's a bookkeeping issue on which tasks to restart (vs which ones not to). I want to avoid that kind of bookkeeping. From googling around I notice some qalter commands have rather different date formats, but the above appears to be correct, as far as I can tell from the man docs. Any help would be appreciated.
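
    Two hedged notes on the PBS side: qrerun only applies to jobs that have actually started (it requeues a running job), which would explain "Request invalid for state of job" on an idle one; for a queued job, qalter -a by itself should set the eligible start time, and qhold/qrls is the more direct politeness tool. A sketch using the job IDs from the question:

      qalter -a 10051000 292398               # datetime is [[[[CC]YY]MM]DD]hhmm, so Oct 5, 10:00
      qstat -f 292398 | grep Execution_Time   # confirm the attribute took
      qhold -h u 292398                       # or: user-hold the job so others jump it
      qrls -h u 292398                        # release the hold later

    Whether a scheduler honours Execution_Time can still depend on the site's setup (e.g. Maui/Moab policies), so if the attribute sets but is ignored, the hold route is the safer bet for scripting.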

  • Remotely managing Scheduled Tasks on another computer: Access Denied

    - by Eptin
    I need to remotely create new scheduled tasks from a Windows 7 computer in my company, which according to this Microsoft TechNet article I should be able to do: http://technet.microsoft.com/en-us/library/cc766266.aspx

    From within Task Scheduler, on the menu I click Action > Connect to Another Computer. I browse for the remote computer's name (I use Check Names to verify that the name is correct), then check 'Connect as another user' and enter \Administrator and the local admin password. Whenever I try this, I get the error message:

      Task Scheduler: You do not have permission to access this computer

    Firewall isn't the problem: I am able to use Remote Desktop with this username and password combo, so I would expect it to work for remote management as well. The remote computer has firewall exceptions for Remote Scheduled Tasks Management, Remote Service Management, and Remote Desktop, among other things. Heck, I even tried turning off the firewall for that individual computer and it still didn't work.

    More details: I have administrative remote access to several other Windows 7 Enterprise computers, though I log in as the local Administrator (whose administrative rights are only recognized by that local machine, not by the domain). The computer I am managing from is on the domain and also has administrative rights that are recognized on the domain.

    More experimentation: If I go the other way around and remote-desktop into the other machine, and from there open Task Scheduler and 'connect to another computer', I am able to connect back to my main computer using the username and password recognized by an administrator on the domain, and successfully schedule a task on my main computer. So it's not a company firewall issue that's preventing anything from working.

    The only permissions requirement Microsoft talks about is "The user credentials that you use to connect to the remote computer must be a member of the Administrators group on the remote computer". I'm logging in as an Administrator on each of the local machines, so why doesn't it work?
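
    A hedged suspect that fits these symptoms: on Windows 7, remote connections that authenticate as a local (non-domain) administrator get a filtered, non-admin token by default ("remote UAC"), which blocks admin interfaces like remote Task Scheduler while Remote Desktop, being an interactive logon, still works. The standard opt-out is the LocalAccountTokenFilterPolicy value on the target machine, a documented registry switch with the usual security trade-off:

      reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    After setting it on the remote machine, connecting as \Administrator should present a full admin token; as a fallback, schtasks /create /s <machine> /u Administrator /p <password> exercises the same path from the command line.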
