Search Results

Search found 48823 results on 1953 pages for 'run loop'.


  • MySQL: Replicating the MySQL database

    - by Lee
    Hi guys, I have a primary write server (server1) which replicates to two query servers (server2 and server3). I am replicating all databases to these servers, including the mysql database. When I execute a GRANT as follows, replication works perfectly:

        GRANT execute,select ON database1.* TO `user1`@`host` IDENTIFIED BY 'password';

    However, if I issue the same GRANT to alter permissions on an existing user, without the IDENTIFIED clause, replication breaks:

        Error 'Can't find any matching row in the user table' on query. Default database: 'mysql'.
        Query: 'GRANT execute,select ON database1.* TO `user`@`host`'

    If I try to run the query manually I get the same error.

    Server 1:

        mysql> SHOW VARIABLES LIKE "%version%";
        +------------------+------------+
        | Variable_name    | Value      |
        +------------------+------------+
        | protocol_version | 10         |
        | version          | 5.0.77-log |
        +------------------+------------+

    my.cnf:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        old_passwords=1
        symbolic-links=0
        max_allowed_packet = 100M
        log-bin = /var/lib/mysql/logs/borg-binlog.log
        max_binlog_size=50M
        expire_logs_days=7

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    Server 2:

        mysql> SHOW VARIABLES LIKE "%version%";
        +------------------+------------+
        | Variable_name    | Value      |
        +------------------+------------+
        | protocol_version | 10         |
        | version          | 5.0.77-log |
        +------------------+------------+

    my.cnf:

        server-id=12
        master-host=x
        master-user=x
        master-password=x
        master-connect-retry=60
        relay-log=/var/lib/mysql/borg-relay.log
        relay-log-index=/var/lib/mysql/borg-relay-log.index

    Thanks for taking a look.

    Edit: Currently it's running fine, until you do the GRANT which breaks it:

        mysql> show slave status\G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host: 10.128.0.5
        Master_User: repli-ragnarok
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File: borg-binlog.002730
        Read_Master_Log_Pos: 4375760
        Relay_Log_File: borg-relay.005489
        Relay_Log_Pos: 4375899
        Relay_Master_Log_File: borg-binlog.002730
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 0
        Last_Error:
        Skip_Counter: 0
        Exec_Master_Log_Pos: 4375760
        Relay_Log_Space: 4375899
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: 0
        1 row in set (0.00 sec)

    Edit: Broken show slave status from history:

        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host: 10.128.0.5
        Master_User: repli-valhalla
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File: borg-binlog.002729
        Read_Master_Log_Pos: 40429793
        Relay_Log_File: borg-relay.005486
        Relay_Log_Pos: 40311514
        Relay_Master_Log_File: borg-binlog.002729
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 1133
        Last_Error: Error 'Can't find any matching row in the user table' on query. Default database: 'mysql'. Query: 'GRANT execute,select ON auth_tracker.* TO `mail-sin1`@`%.sin1.netline.net.uk` IDENTIFIED BY 'mail-sin1666''
        Skip_Counter: 0
        Exec_Master_Log_Pos: 40311375
        Relay_Log_Space: 40429932
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL
        1 row in set (0.06 sec)
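    A note on the failure mode: a bare GRANT (no IDENTIFIED BY clause) only modifies an account that already exists in mysql.user, so this error means the slave's grant tables have drifted from the master's. A hedged first diagnostic (a sketch, using the user/host pair from the failing statement above) is to compare the account row on both servers:

        -- run on the master and on the broken slave; a bare GRANT fails
        -- wherever this returns no row
        SELECT user, host FROM mysql.user
        WHERE user = 'mail-sin1' AND host = '%.sin1.netline.net.uk';

        -- after repairing the row directly, reload the grant tables
        FLUSH PRIVILEGES;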


  • Solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver conf file is as follows (and going by what they say on the puppet site, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because config print points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this working?

    Update: I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in the logs:

        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

        [amisr1@blramisr195602 ~]$ sudo facter fqdn
        blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, and regenerated and signed the certificates on the master using the option --allow-dns-alt-names:

        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
        Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
        Signed certificate request for blramisr195602.XXXXXX.com
        Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared by IP rather than by hostname. Is there any Nginx configuration to change this behavior?
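    For what it's worth: the "Could not resolve 10.209.47.31: no name" line suggests the master falls back to matching ACLs against the bare IP when reverse DNS fails, so none of the hostname-based rules can match. A hedged workaround sketch, assuming Puppet 3.x auth.conf (which accepts allow_ip) and that the agents live in 10.209.47.0/24 (adjust the subnet to yours):

        # sketch only: also permit catalog requests by source IP, for
        # masters that cannot reverse-resolve their agents
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1
        allow_ip 10.209.47.0/24

    Fixing reverse DNS for the agent addresses (or adding them to /etc/hosts on the master) would address the same mismatch without touching auth.conf.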


  • How to stop a QDialog from executing while still in the __init__ method (or immediately after)?

    - by Jonathan
    I am wondering how I can go about stopping a dialog from opening if certain conditions are met in its __init__ method. The following code calls self.close(), and the call does run, but (I'm assuming) since the dialog has not yet started its event loop, it doesn't trigger the close event. So is there another way to close and/or stop the dialog from opening without triggering an event? Example code:

        from PyQt4 import QtCore, QtGui

        class dlg_closeInit(QtGui.QDialog):
            '''Close the dialog if a certain condition is met in the __init__ statement'''
            def __init__(self):
                QtGui.QDialog.__init__(self)
                self.txt_mytext = QtGui.QLineEdit('some text')
                self.btn_accept = QtGui.QPushButton('Accept')
                self.myLayout = QtGui.QVBoxLayout(self)
                self.myLayout.addWidget(self.txt_mytext)
                self.myLayout.addWidget(self.btn_accept)
                self.setLayout(self.myLayout)
                # Connect the button
                self.connect(self.btn_accept, QtCore.SIGNAL('clicked()'), self.on_accept)
                self.close()

            def on_accept(self):
                # Get the data...
                self.mydata = self.txt_mytext.text()
                self.accept()

            def get_data(self):
                return self.mydata

            def closeEvent(self, event):
                print 'Closing...'

        if __name__ == '__main__':
            import sys
            app = QtGui.QApplication(sys.argv)
            dialog = dlg_closeInit()
            if dialog.exec_():
                print dialog.get_data()
            else:
                print "Failed"
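    For reference, close() before exec_() is effectively a no-op because no event loop exists yet. One common pattern (a sketch, not the only approach; condition_met is a hypothetical guard) is to schedule the rejection so it fires the moment the loop starts:

        import sys
        from PyQt4 import QtCore, QtGui

        class dlg_closeInit(QtGui.QDialog):
            def __init__(self, condition_met):
                QtGui.QDialog.__init__(self)
                if condition_met:
                    # a 0 ms single-shot timer runs as soon as exec_()
                    # starts the event loop, so the dialog rejects itself
                    # immediately and exec_() returns a falsy result
                    QtCore.QTimer.singleShot(0, self.reject)

        if __name__ == '__main__':
            app = QtGui.QApplication(sys.argv)
            dialog = dlg_closeInit(condition_met=True)
            if dialog.exec_():
                print 'accepted'
            else:
                print "Failed"

    Raising an exception in __init__ and catching it at the call site avoids showing the dialog at all, at the cost of a try/except around construction.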


  • Ubuntu Natty: 32-bit userland, 64-bit kernel?

    - by dsimcha
    I'm trying to manually install a 64-bit kernel on 32-bit Ubuntu. I have my reasons for doing so, but they're too complicated to explain here. Prior to Natty, this worked fine. Now, on Natty, I get the following error message when I try doing it the same way:

        dsimcha@dsimcha-laptop:~$ sudo dpkg -i --force-architecture linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
        [sudo] password for dsimcha:
        dpkg: error processing linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb (--install):
         cannot access archive: No such file or directory
        Errors were encountered while processing:
         linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
        dsimcha@dsimcha-laptop:~$ cd Downloads/
        dsimcha@dsimcha-laptop:~/Downloads$ sudo dpkg -i --force-architecture linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
        dpkg: warning: overriding problem because --force enabled: package architecture (amd64) does not match system (i386)
        (Reading database ... 159153 files and directories currently installed.)
        Preparing to replace linux-image-2.6.38-8-server:amd64 2.6.38-8.42 (using linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb) ...
        Done.
        Unpacking replacement linux-image-2.6.38-8-server:amd64 ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.38-8-server /boot/vmlinuz-2.6.38-8-server
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.38-8-server /boot/vmlinuz-2.6.38-8-server
        dpkg: dependency problems prevent configuration of linux-image-2.6.38-8-server:amd64:
         linux-image-2.6.38-8-server:amd64 depends on initramfs-tools (>= 0.36ubuntu6).
         linux-image-2.6.38-8-server:amd64 depends on coreutils | fileutils (>= 4.0); however:
          Package coreutils:amd64 is not installed.
         linux-image-2.6.38-8-server:amd64 depends on module-init-tools (>= 3.3-pre11-4ubuntu3); however:
         linux-image-2.6.38-8-server:amd64 depends on wireless-crda; however:
        dpkg: error processing linux-image-2.6.38-8-server:amd64 (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         linux-image-2.6.38-8-server:amd64

    When I try installing the dependencies manually, I get, for example:

        dsimcha@dsimcha-laptop:~/Downloads$ sudo dpkg -i --force-architecture coreutils_8.5-1ubuntu6_amd64.deb
        dpkg: warning: overriding problem because --force enabled: package architecture (amd64) does not match system (i386)
        dpkg: error processing coreutils_8.5-1ubuntu6_amd64.deb (--install):
         coreutils:amd64 8.5-1ubuntu6 (Multi-Arch: no) is not co-installable with coreutils:i386 8.5-1ubuntu6 (Multi-Arch: no) which is currently installed
        Errors were encountered while processing:
         coreutils_8.5-1ubuntu6_amd64.deb

    Has anyone had any success installing 64-bit kernels on 32-bit Natty? If so, how can this be done?


  • Access to SQL Server when administrator account deleted

    - by Shiraz Bhaiji
    An interesting situation here. We have a database server, used for testing only, where someone went in and deleted the administrator login. Since this is a test server, there was no other admin-level login on the server. Is there a way to get access to the server again without reinstalling SQL Server? We do not need the data in the databases; these are dropped and recreated every time the tests are run.
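    For reference, the usual recovery path (a sketch, assuming SQL Server 2005 or later on a default instance, and that you are a member of the machine's local Administrators group): restart the instance in single-user mode, where local administrators are granted sysadmin access, recreate an admin login, then restart normally. Make sure nothing else grabs the single connection first (close SSMS and stop SQL Agent).

        REM restart the default instance in single-user mode
        net stop MSSQLSERVER
        net start MSSQLSERVER /m

        REM connect with Windows auth and rebuild an admin login
        sqlcmd -S . -E -Q "CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS; EXEC sp_addsrvrolemember 'BUILTIN\Administrators', 'sysadmin';"

        REM back to normal multi-user operation
        net stop MSSQLSERVER
        net start MSSQLSERVER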


  • Hyper-V and Drobo Pro

    - by Jon Rauschenberger
    I'm considering getting a fully loaded Drobo Pro and using it to store the VHDs that would run on a pair of Hyper-V host machines. The host machines would connect to the Drobo Pro via iSCSI. Anyone have experience with the Drobo Pro and Hyper-V? My main question/concern is about speed: is the Drobo fast enough to handle, say, a dozen VHDs all running concurrently? jon


  • IIS7 + WCF + Silverlight problems

    - by Eanna
    Hey, I've been building a Silverlight application and a WCF service for a while now and recently tried to host them in IIS7. I installed IIS7 on Windows Server 2008 R2 and added these two applications to my default website. I am having a number of problems, so I'm hoping one of you can help out...

    1) The Silverlight and WCF service applications do not work with pass-through authentication. I need to "connect as" the administrator server account when setting up the application. I read online that you should only need to use the "connect as" field when you are connecting to another computer. If I don't supply the admin credentials I get this error. Do I have to set up permissions somewhere else?

        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.
        Module: IIS Web Core
        Notification: BeginRequest
        Handler: Not yet determined
        Error Code: 0x80070005
        Config Error: Cannot read configuration file due to insufficient permissions
        Config File: \\?\C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\WCFService\Website\web.config
        Requested URL: http://localhost:80/WCFService/Service.svc
        Physical Path: C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\WCFService\Website\Service.svc
        Logon Method: Not yet determined
        Logon User: Not yet determined
        Config Source: -1: 0:

    This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error.

    2) Visual Studio generated two web pages to run my Silverlight application (.html and .aspx). When I am running the Silverlight application (connected as admin) I can navigate to the .html page, no problem. When I try to open the .aspx file I get the following error:

        Server Error in '/Platform' Application.
        Access is denied.
        Description: An error occurred while accessing the resources required to serve this request. You might not have permission to view the requested resources.
        Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the Web server's administrator to give you access to 'C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\Platform\Website\PlatformTestPage.aspx'.
        Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1

    3) The WCF service runs fine (again, connected as admin) until I restart the server. When I try to run the WCF service after a reboot, the MySQL assembly seems to be missing from the solution. If I just rebuild the solution and run the service again... it works (until the next restart). What's causing this error? Screenshot here: http://tinypic.com/view.php?pic=5yasqx&s=5

        Server Error in '/WCFService' Application.
        Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.IO.FileLoadException: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Assembly Load Trace: The following information can be helpful to determine why the assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' could not be loaded.
        WRN: Assembly binding logging is turned OFF.
        To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
        Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Stack Trace:
        [FileLoadException: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.]
           System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) +0
           System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) +567
           System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +192
           System.Reflection.Assembly.Load(String assemblyString) +35
           System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) +243
           System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +1423
           System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +50
           System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +1132

        [ServiceActivationException: The service '/WCFService/Service.svc' cannot be activated due to an exception during compilation. The exception message is: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied..]
           System.Runtime.AsyncResult.End(IAsyncResult result) +889824
           System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +179150
           System.Web.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) +107

        Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1

    That's about it. I hope someone reads this message; I wasted most of the weekend trying to fix these problems on my own... thanks
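    All three symptoms look like NTFS permissions: the content only loads when "connect as" substitutes admin credentials, which means the application pool identity itself cannot read the folder. A hedged starting point (a sketch, assuming the default IIS_IUSRS group and the content path from the errors; note that a Dropbox resync may rewrite ACLs, which would also explain MySql.Data becoming unreadable after each reboot):

        REM grant IIS worker processes read/execute on the site content,
        REM inherited by subfolders and files, applied to existing items
        icacls "C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)RX" /T

    Moving the content out of the Dropbox-synced profile folder (e.g. under C:\inetpub) would sidestep both the ACL churn and the profile-path permissions entirely.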


  • Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition "expected behavior"?

    - by Jeremy Friesner
    My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch. This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine.

    However, we've recently seen a number of units where, after a number of hard power cycles, the ext3 partition starts to develop structural issues; in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.

    My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns?

    My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user data is not journalled, and so munged/missing/truncated user files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below.)

    My co-worker, on the other hand, says that this is known/expected behavior, because SSD controllers sometimes re-order write commands, and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time.

    Which of us is right?

        Embedded-PC-failsafe:~# ls
        Embedded-PC-failsafe:~# umount /mnt/unionfs
        Embedded-PC-failsafe:~# e2fsck /dev/sda3
        e2fsck 1.41.3 (12-Oct-2008)
        embeddedrootwrite contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Invalid inode number for '.' in directory inode 46948.  Fix<y>? yes
        Directory inode 46948, block 0, offset 12: directory corrupted  Salvage<y>? yes
        Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075.  Clear<y>? yes
        Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076.  Clear<y>? yes
        Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080.  Clear<y>? yes
        Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081.  Clear<y>? yes
        Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083.  Clear<y>? yes
        Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085.  Clear<y>? yes
        Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088.  Clear<y>? yes
        Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073.  Clear<y>? yes
        Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074.  Clear<y>? yes
        Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078.  Clear<y>? yes
        Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082.  Clear<y>? yes
        Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084.  Clear<y>? yes
        Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086.  Clear<y>? yes
        Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077.  Clear<y>? yes
        Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079.  Clear<y>? yes
        Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087.  Clear<y>? yes
        Pass 3: Checking directory connectivity
        '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).  Fix<y>? yes
        Couldn't fix parent of inode 46948: Couldn't find parent directory entry
        Pass 4: Checking reference counts
        Unattached inode 46945  Connect to /lost+found<y>? yes
        Inode 46945 ref count is 2, should be 1.  Fix<y>? yes
        Inode 46953 ref count is 5, should be 4.  Fix<y>? yes
        Pass 5: Checking group summary information
        Block bitmap differences:  -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517  Fix<y>? yes
        Free blocks count wrong for group #6 (17247, counted=17611).  Fix<y>? yes
        Free blocks count wrong (161691, counted=162055).  Fix<y>? yes
        Inode bitmap differences:  +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096)  Fix<y>? yes
        Free inodes count wrong for group #6 (7608, counted=7624).  Fix<y>? yes
        Free inodes count wrong (61919, counted=61935).  Fix<y>? yes

        embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
        embeddedrootwrite: ********** WARNING: Filesystem still has errors **********
        embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

        Embedded-PC-failsafe:~# e2fsck /dev/sda3
        e2fsck 1.41.3 (12-Oct-2008)
        embeddedrootwrite contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Directory entry for '.' in ... (46948) is big.  Split<y>? yes
        Missing '..' in directory inode 46948.  Fix<y>? yes
        Setting filetype for entry '..' in ... (46948) to 2.
        Pass 3: Checking directory connectivity
        '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).  Fix<y>? yes
        Pass 4: Checking reference counts
        Inode 2 ref count is 12, should be 13.  Fix<y>? yes
        Pass 5: Checking group summary information

        embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
        embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

        Embedded-PC-failsafe:~# e2fsck /dev/sda3
        e2fsck 1.41.3 (12-Oct-2008)
        embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
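    One concrete variable worth checking in that debate: write barriers. ext3 shipped with barriers disabled by default on most distributions of this era, and without barriers the journal's commit ordering is not enforced through the drive's volatile write cache, so a controller that reorders or drops cached writes on power cut can corrupt metadata exactly as shown. A hedged sketch of the settings involved (assuming /dev/sda3 is the root line in /etc/fstab):

        # /etc/fstab: force journal barriers through the SSD's cache
        # (many distros of this vintage defaulted ext3 to barrier=0)
        /dev/sda3  /  ext3  defaults,barrier=1  0  1

        # or without a reboot, for testing:
        mount -o remount,barrier=1 /dev/sda3

    If the SSD's controller ignores cache-flush commands entirely, barriers cannot help, and only a drive with power-loss protection will.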


  • Running PCIE card with the on board video card as well in Windows 7

    - by Russ Johnson
    I have an eMachines Windows 7 PC. I am trying to run the onboard video card along with the dual-DVI card I installed in the PCIe slot. In the CMOS setup the onboard card shows as disabled, and it will not let me enable it; it's greyed out, so I can't even highlight it to change anything. I have done this before on a few different XP machines, so I know it's possible. Any ideas?


  • HBA card status

    - by Alex
    Hello. Is it possible somehow to get the status of an HBA card using PowerShell or any other API, instead of logging in to a server and running "powermt display path"? Thanks.
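    For what it's worth, Windows exposes the vendor-neutral HBA API through WMI in the root\WMI namespace, so port state can be read remotely without a logon session. A rough sketch (assuming Fibre Channel HBAs whose drivers implement the Microsoft HBA API; PowerPath's own path state is not covered by these classes, and server01 is a placeholder):

        # read FC HBA port attributes remotely via WMI
        Get-WmiObject -Namespace root\WMI -Class MSFC_FibrePortHBAAttributes `
            -ComputerName server01 |
            Select-Object InstanceName, @{n='PortState';e={$_.Attributes.PortState}}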


  • mplayer audio desync

    - by geek
    I have an avi file and an ac3 file that contains an alternate audio stream. I run mplayer like this:

        mplayer -audiofile foo.ac3 bar.avi

    mplayer takes the audio stream from the ac3 file as expected, but when I try to seek through the video using the arrow or pgup/pgdown keys, the audio gets desynced: mplayer just starts playing the audio stream from the beginning. Do I have to pass any additional command-line arguments to make seeking work properly without desyncing the audio?


  • Standalone or free application to back up ADAM / AD LDS database files

    - by Darqer
    Do you know of any small, standalone, free tool that can be run from a console to back up / restore ADAM / AD LDS database files (like adamntds.dit, edbres00001.jrs, etc.)? I tried stopping the ADAM service and copying these files to another location, but afterwards I was unable to restore ADAM from those files. I know that on Windows Server 2003 I could use a backup tool provided by Microsoft, but it seems to be unavailable on Windows Server 2008.
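    On Windows Server 2008 one built-in console option (a sketch; dsdbutil ships with the AD LDS role, and "instance1" is a placeholder for your instance name) is an install-from-media snapshot, which produces a consistent copy of the database without stopping the service:

        REM create a full IFM backup of the AD LDS instance
        dsdbutil "activate instance instance1" "ifm" "create full C:\adlds-backup" quit quit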


  • Banking applications

    - by Rohit
    Is there still scope left for new banking software? Almost all banks now run a core-banking solution, yet I still see new companies arriving with their own banking solutions. Is there still room for newcomers in this segment?


  • Detach current session and attach to another session, done with one script, can I?

    - by Jimm Chen
    After reading the vague official doc of GNU screen (http://www.gnu.org/software/screen/manual/screen.html) and asking quite a few questions at this site, I still cannot figure out how to accomplish this task with a shell script. The task takes some words to describe.

    Assume I'm using PuTTY to telnet into my Linux server.

    STEP 1: Launch 2 telnet connections. From putty window 1 (PTWIN1), telnet into a Linux Bash shell, execute screen -RR to launch a screen session, and get session name 21385.pts-4.linux-ic37. From putty window 2 (PTWIN2), do the same as in PTWIN1, but this time I get session name 22041.pts-9.linux-ic37. Now we have two screen sessions running simultaneously. We can check this:

        $ screen -ls
        There are screens on:
                22041.pts-9.linux-ic37  (Attached)
                21385.pts-4.linux-ic37  (Attached)
        2 Sockets in /var/run/uscreens/S-chj2.

    STEP 2: Assume that for some reason PTWIN1's TCP connection is lost abnormally (but the server doesn't know that), and an urgent piece of work is pending in session 21385, so I want to quickly regain control of it. Fortunately, we know the 21385 session is still there, so I want to have PTWIN2 attach to session 21385. Because I hate having to remember the esoteric screen options all the time, I decided to write a script called sttach. I hope that sttach 21385.pts-4.linux-ic37 can let me attach to session 21385 (from PTWIN2). Now let's say sttach works well and I take control of 21385 in PTWIN2.

    STEP 3: Some minutes later, I want to go back to work in session 22041. Here, please allow me to have PTWIN2 remain associated with session 21385. What I would like to do is launch another putty window (PTWIN3), telnet into the server, and execute sttach 22041.pts-9.linux-ic37 in the hope that I can resume session 22041 in PTWIN3.

    You can see the benefit of sttach: as long as I know the target session name, I can call it to have my PuTTY window switch to that session, regardless of whether the target session is "(Attached)" or "(Detached)", and regardless of whether the running context is inside a screen session or not.

    Now the question: how do I write the (Bash) script sttach? I mean, run screen with the appropriate options in sttach to accomplish the goal. Waiting for your kind answer. Thank you.

    My previous questions regarding GNU screen:
    GNU screen, how to get current sessionname programmatically
    Is it possible to change GNU screen session name after created?
    How do I know I'm running inside a linux "screen" or not?

    My env: openSUSE Linux 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06
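    For what it's worth, screen's -d -r combination already covers most of the workflow described: it detaches the target session from wherever it is attached and reattaches it to the current terminal, whether the session was "(Attached)" or "(Detached)". A minimal sketch of sttach under that assumption (it works when launched from a plain shell, as in STEP 2 and STEP 3; launching it from inside another screen session would nest screens and needs extra care):

        #!/bin/bash
        # sttach <session-name>
        # Steal the named screen session: detach it from wherever it is
        # currently attached (-d) and reattach it here (-r).
        target="$1"
        if [ -z "$target" ]; then
            echo "usage: sttach <session-name>" >&2
            exit 1
        fi
        exec screen -d -r "$target"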


  • DKIM error: dkim=neutral (bad version) header.i=

    - by GBC
    I've been struggling for the last couple of hours with setting up DKIM on my Postfix/CentOS 5.3 server. It finally sends and signs the emails, but apparently Google still does not like it. The error I'm getting, from Google's "show original" interface, is:

        dkim=neutral (bad version) header.i=@mydomain.com.au

    This is what my DKIM-Signature header looks like:

        v=1; a=rsa-sha1; c=simple/simple; d=mydomain.com.au; s=default;
        t=1267326852; bh=0wHpkjkf7ZEiP2VZXAse+46PC1c=;
        h=Date:From:Message-Id:To:Subject;
        b=IFBaqfXmFjEojWXI/WQk4OzqglNjBWYk3jlFC8sHLLRAcADj6ScX3bzd+No7zos6i
          KppG9ifwYmvrudgEF+n1VviBnel7vcVT6dg5cxOTu7y31kUApR59dRU5nPR/to0E9l
          dXMaBoYPG8edyiM+soXo7rYNtlzk+0wd5glgFP1I=

    I'm very appreciative of any suggestions as to how I can solve this problem! By the way, here is exactly how I installed dkim-milter on CentOS 5.3 for Postfix, if anyone is interested (based on this guide):

        mkdir dkim-milter
        cd dkim-milter
        wget http://www.topdog-software.com/oss/dkim-milter/dkim-milter-2.8.3-1.x86_64.rpm
        rpm -Uvh dkim-milter-2.8.3-1.x86_64.rpm
        /usr/bin/dkim-genkey -r -d mydomain.com.au

    (The newest version is at http://www.topdog-software.com/oss/dkim-milter/)

    Add the contents of default.txt to DNS as TXT records:

        _ssp._domainkey     TXT  dkim=unknown
        _adsp._domainkey    TXT  dkim=unknown
        default._domainkey  TXT  v=DKIM1; g=*; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GWETBNiQKBgQC5KT1eN2lqCRQGDX+20I4liM2mktrtjWkV6mW9WX7q46cZAYgNrus53vgfl2z1Y/95mBv6Bx9WOS56OAVBQw62+ksXPT5cRUAUN9GkENPdOoPdpvrU1KdAMW5c3zmGOvEOa4jAlB4/wYTV5RkLq/1XLxXfTKNy58v+CKETLQS/eQIDAQAB

    Then install the key:

        mv default.private default
        mkdir /etc/mail/dkim/keys/mydomain.com.au
        mv default /etc/mail/dkim/keys/mydomain.com.au
        chmod 600 /etc/mail/dkim/keys/mydomain.com.au/default
        chown dkim-milt.dkim-milt /etc/mail/dkim/keys/mydomain.com.au/default

    /etc/dkim-filter.conf:

        ADSPDiscard yes
        ADSPNoSuchDomain yes
        AllowSHA1Only no
        AlwaysAddARHeader no
        AutoRestart yes
        AutoRestartRate 10/1h
        BaseDirectory /var/run/dkim-milter
        Canonicalization simple/simple
        Domain mydomain.com.au    # add all your domains here, separated by commas
        ExternalIgnoreList /etc/mail/dkim/trusted-hosts
        InternalHosts /etc/mail/dkim/trusted-hosts
        KeyList /etc/mail/dkim/keylist
        LocalADSP /etc/mail/dkim/local-adsp-rules
        Mode sv
        MTA MSA
        On-Default reject
        On-BadSignature reject
        On-DNSError tempfail
        On-InternalError accept
        On-NoSignature accept
        On-Security discard
        PidFile /var/run/dkim-milter/dkim-milter.pid
        QueryCache yes
        RemoveOldSignatures yes
        Selector default
        SignatureAlgorithm rsa-sha1
        Socket inet:20209@localhost
        Syslog yes
        SyslogSuccess yes
        TemporaryDirectory /var/tmp
        UMask 022
        UserID dkim-milt:dkim-milt
        X-Header yes

    /etc/mail/dkim/keylist:

        *@mydomain.com.au:mydomain.com.au:/etc/mail/dkim/keys/mydomain.com.au/default

    Added to /etc/postfix/main.cf:

        smtpd_milters = inet:localhost:20209
        non_smtpd_milters = inet:localhost:20209
        milter_protocol = 2
        milter_default_action = accept

    /etc/mail/dkim/trusted-hosts and /etc/mail/local-host-names:

        localhost
        127.0.0.1

    Finally:

        /sbin/chkconfig dkim-milter on
        /etc/init.d/dkim-milter start
        /etc/init.d/postfix restart
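    A hedged first check, assuming the record was published as written above: Gmail reports "bad version" when the v= tag it reads back from DNS is malformed, so verify that resolvers return the record exactly as generated, starting with v=DKIM1 and with the long p= value as one unbroken string (a record split or re-quoted by the DNS control panel is a common culprit):

        # compare what DNS actually serves with the contents of default.txt
        dig +short TXT default._domainkey.mydomain.com.au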


  • Good IE6/IE7 simulator applications?

    - by snitzr
    I have IE8 installed, and I would like to test websites in IE6 and IE7. I cannot use Adobe's BrowserLab for testing because the website needing tests contains dynamic content. I cannot find a good application to simulate IE6 or IE7. Is one available/recommended? Or can I install and run IE6 through IE8 on my machine at the same time?


  • What's an efficient way of calculating the nearest point?

    - by Griffo
    I have objects with location data stored in Core Data, and I would like to be able to fetch and display just the nearest point to the current location. I'm aware there are formulas which will calculate the distance from the current lat/long to a stored lat/long, but I'm curious about the best way to perform this for a set of 1000+ points stored in Core Data. I know I could just return the points from Core Data to an array and then loop through it looking for the minimum distance between points, but I'd imagine there's a more efficient method, possibly leveraging Core Data in some way. Any insight would be appreciated.

    EDIT: I don't know how I missed this in my initial search, but this SO question suggests just iterating through an array of Core Data objects, while limiting the array size with a bounding box based on the current location. Is this the best I can do?
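    For reference, the bounding-box idea maps directly onto a fetch predicate, so the coarse filtering runs inside the SQLite store instead of in memory, and only a handful of candidates gets the exact distance computation. A rough sketch, assuming an entity named "Point" with numeric latitude/longitude attributes and an existing managed object context (all names hypothetical):

        // example current location and box half-width in degrees
        double lat = 53.34, lon = -6.26, delta = 0.05;

        // fetch only the points inside the box around the current location
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Point"
                                     inManagedObjectContext:context];
        request.predicate = [NSPredicate predicateWithFormat:
            @"latitude >= %f AND latitude <= %f AND longitude >= %f AND longitude <= %f",
            lat - delta, lat + delta, lon - delta, lon + delta];
        NSArray *candidates = [context executeFetchRequest:request error:NULL];
        // then scan 'candidates' (usually a few rows, not 1000+) for the
        // true minimum using the exact distance formula

    If the box comes back empty, widen delta and retry.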


  • What Issue Tracking System to select?

    - by Mikee
    What issue tracking system is the most appropriate for fast, big, multilingual, international websites? The system has to handle both technical and content/editorial issues. What size and type of site do you run? What system are you using to keep it state of the art? Thanks a lot for sharing your good or bad experiences.


  • Puppet configuration file on Windows

    - by Jeff Storey
    I'm running Puppet on Windows as an admin (testing on Windows 7, even though it is not officially supported). When I install Puppet following the Windows installation instructions, no puppet.conf file is generated in C:/ProgramData/PuppetLabs/puppet/etc. I can run puppet agent --genconfig to create one, but regardless of what values I put in there, it doesn't seem to respect them. Is this just a Puppet/Windows issue? Or am I doing something wrong?
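    One hedged way to narrow this down (the flags below are standard across platforms): ask the agent which configuration file and effective settings it is actually using, rather than assuming the ProgramData path; if the printed path differs from the file being edited, that would explain the ignored values.

        puppet agent --configprint config
        puppet agent --configprint all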


  • Can I extract a specific folder using tar to another folder?

    - by PeanutsMonkey
    I am new to the world of Linux and seem to have run into a stumbling block. I know I can extract a specific directory from an archive using the command tar xvfz archivename.tar.gz sampledir/. However, how can I extract sampledir/ to testdir/ rather than to the path the archive is in? E.g. currently the archive is at /tmp/archivename.tar.gz and I would like to extract sampledir to testdir, which is at /tmp/testdir.
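    For reference, tar's -C flag changes the working directory before extracting, which does exactly this; a small sketch using the paths from the question (the --strip-components variant is GNU tar only):

        # extract sampledir/ from the archive into /tmp/testdir
        mkdir -p /tmp/testdir
        tar xvfz /tmp/archivename.tar.gz -C /tmp/testdir sampledir/

        # or drop the leading sampledir/ component so the files land
        # directly in /tmp/testdir
        tar xvfz /tmp/archivename.tar.gz -C /tmp/testdir --strip-components=1 sampledir/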


  • How to get an array from a database in Rails3 even when there's only one record?

    - by yuval
    In Rails 3, I have the following line:

        @messages = Message.where("recipient_deleted = ?", false).find_by_recipient_id(@user.id)

    In my view, I loop through @messages and print out each message, as such:

        <% for message in @messages %>
          <%= message.sender_id %>
          <%= message.created_at %>
          <%= message.body %>
        <% end %>

    This works flawlessly when there are several messages. The problem is that when I have one message, I get an error thrown at me:

        undefined method `each'

    How do I force Rails to always return an array of messages, even if there's only one message, so that each always works? Thanks!
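    A hedged note on why this breaks: find_by_recipient_id is a single-record finder, so it returns one Message object (or nil) rather than a collection. Expressing the whole query with where returns an ActiveRecord::Relation, which is enumerable however many rows match; a sketch in Rails 3 terms:

        # array-like for zero, one, or many rows, so 'each' always works
        @messages = Message.where("recipient_deleted = ?", false)
                           .where(:recipient_id => @user.id)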


  • Suggestions for Scheduled Tasks to call OSQL without hard-coding cleartext password

    - by Ian Boyd
    Can anyone think of any techniques where I can have a Windows scheduled task run OSQL without the cleartext password being visible? E.g.:

        >osql -U iboyd -P CorrectHorseBatteryStaple

    Assumption: no Windows Authentication (since it's not an option). I was hoping there was something like:

        >osql -encryptPassword "CorrectHorseBatteryStaple"
        Encrypted password: Q29ycmVjdEhvcnNlQmF0dGVyeVN0YXBsZQ==

    And then I could call OSQL with:

        >osql -U ian -P Q29ycmVjdEhvcnNlQmF0dGVyeVN0YXBsZQ==

    But that's not something Microsoft implemented.
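    One hedged partial measure (documented for osql, though it relocates the secret rather than encrypting it): when -P is omitted, osql reads the OSQLPASSWORD environment variable, so the password can live inside an NTFS-protected batch file instead of the task's visible command line (the server and query names below are placeholders):

        REM run-task.cmd: restrict read access to this file with NTFS ACLs;
        REM the password then never appears in the task's command line
        set OSQLPASSWORD=CorrectHorseBatteryStaple
        osql -U iboyd -S myserver -Q "EXEC dbo.NightlyJob"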


  • Network is going down once per day

    - by Charly
    Once per day the network on eth0 goes down, and we need to run sudo ifdown eth0; sudo ifup eth0 to bring it back up. Here is the syslog:

        Feb 11 12:48:01 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 12:52:35 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 12:56:23 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:00:28 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:04:29 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:09:16 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:13:53 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:18:16 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:22:25 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:26:52 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:30:44 www-tech-1 dhclient: DHCPREQUEST of <address> on eth0 to 131.121.113.228 port 67
        Feb 11 13:31:49 www-tech-1 dhclient: There is already a pid file /var/run/dhclient.eth0.pid with pid 3198
        Feb 11 13:31:49 www-tech-1 dhclient: Listening on LPF/eth0/00:e0:81:49:fc:e0
        Feb 11 13:31:49 www-tech-1 dhclient: Sending on LPF/eth0/00:e0:81:49:fc:e0
        Feb 11 13:31:49 www-tech-1 dhclient: DHCPRELEASE on eth0 to 131.121.113.228 port 67
        Feb 11 13:31:49 www-tech-1 dhclient: There is already a pid file /var/run/dhclient.eth0.pid with pid 134519072
        Feb 11 13:31:50 www-tech-1 dhclient: Listening on LPF/eth0/00:e0:81:49:fc:e0
        Feb 11 13:31:50 www-tech-1 dhclient: Sending on LPF/eth0/00:e0:81:49:fc:e0
        Feb 11 13:31:52 www-tech-1 dhclient: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        Feb 11 13:31:52 www-tech-1 dhclient: DHCPREQUEST of 131.121.14.17 on eth0 to 255.255.255.255 port 67
        Feb 11 13:31:53 www-tech-1 kernel: [265383.991682] eth0: no IPv6 routers present

    Please check the last portion of this syslog. Can anybody help me?
