Search Results

Search found 28627 results on 1146 pages for 'case statement'.


  • Fixing a typo in machine name

    - by justSteve
    When I installed Windows, I had a typo in the machine name, which I corrected from the system's 'Computer Name/Domain Changes' dialog - the workstation is a member of a workgroup, not a domain. From everything I can see, the renamed machine name is correct. Shift gears... I'm importing SQL logins from my remote server to this, my development workstation, and have used the script presented here - a script that generates a CREATE statement for each login found. While I was preparing to run this script's output (from the remote box) I needed to change the domain name from the remote's to my local one - so I ran the same script locally (in order to see what SQL thinks my domain name is). SQL has the original machine name - the one with the typo. However, the scripts toss errors if I try to create logins with that identifier:

        CREATE LOGIN [Setve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    But it works correctly if I use the updated machine name:

        CREATE LOGIN [Steve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    So the problem is: do I have a problem I need to solve? Somewhere, deep in the guts of SQL Server, it has a record of a domain name that does not exist. Should I find and fix that discrepancy? thx
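
    For what it's worth, SQL Server stores the name it was installed under in its own server metadata, and renaming the machine afterwards does not update it. A minimal check-and-repair sketch, assuming the stale name really is 'Setve' (note that @@SERVERNAME only reflects the change after a service restart):

        -- does SQL Server still report the old machine name?
        SELECT @@SERVERNAME;
        -- if so, re-register the corrected name
        EXEC sp_dropserver 'Setve';
        EXEC sp_addserver 'Steve', 'local';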

    Read the article

  • How to find the next generated value for an auto-increment column?

    - by Tim Büthe
    I face some trouble with IBM DB2's auto-increment columns. At first, all my columns were defined as GENERATED ALWAYS, but since I had trouble with this when using the "db2 import ..." command, I changed them to GENERATED BY DEFAULT. This is necessary, since I need the IDs to be consistent, because other tables reference them. So using "db2 import ... modified by identityignore ..." isn't an option. When I now import data, the IDs are inserted correctly, but every time I do this, I have to remember to set a new start for the auto-increment column by getting the highest ID + 1 and altering the column like this:

        SELECT MAX(mycolumn) + 1 FROM mytable;
        ALTER TABLE mytable ALTER COLUMN mycolumn RESTART WITH <above_result>;

    If I forget this, an INSERT statement will fail with a duplicate PK error, since the auto-increment column is the primary key. So my question is: Is there a way to find the next value for an auto-increment column, so I could write statements that would check if this value is less than the SELECT MAX and needs to be set? Or: Isn't this whole thing as complicated as it seems to me? Could I somehow import data, preserving the IDs, and have the auto-increment column still working as expected?
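
    Since RESTART WITH only accepts a constant, one workaround is to generate the statement dynamically after each import. A hedged sketch, assuming a DB2 version that supports anonymous SQL PL blocks (9.7+); table and column names are the ones from the example above:

        BEGIN
          DECLARE v_next BIGINT;
          DECLARE v_stmt VARCHAR(256);
          SELECT COALESCE(MAX(mycolumn), 0) + 1 INTO v_next FROM mytable;
          SET v_stmt = 'ALTER TABLE mytable ALTER COLUMN mycolumn RESTART WITH '
                       || RTRIM(CHAR(v_next));
          EXECUTE IMMEDIATE v_stmt;
        END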

    Read the article

  • Why am I not able to create a backup plan for TFS?

    - by noocyte
    I am trying to create a backup plan using the TFS Power Tools, but I keep running into an error. I have checked that the account has Full Control on the share; I can edit, create and delete files there. From the log:

        [Info @07:15:00.403] Starting creating backup test validation
        [Error @07:15:00.700] Microsoft.SqlServer.Management.Smo.FailedOperationException: Backup failed for Server 'WMSI003714N\SqlExpress'.
        ---> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch.
        ---> System.Data.SqlClient.SqlException: Cannot open backup device '\\wmsi003714n\sql dump\Tfs_Configuration_20100910091500.bak'. Operating system error 5 (failed to retrieve text for this error. Reason: 1815). BACKUP DATABASE is terminating abnormally.
           at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
        --- End of inner exception stack trace ---
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(StringCollection sqlCommands, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Smo.ExecutionManager.ExecuteNonQuery(StringCollection queries)
           at Microsoft.SqlServer.Management.Smo.BackupRestoreBase.ExecuteSql(Server server, StringCollection queries)
           at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
        --- End of inner exception stack trace ---
           at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
           at Microsoft.TeamFoundation.PowerTools.Admin.Helpers.BackupFactory.TestBackupCreation(String path)
        [Error @07:15:00.731] !Verify Error!: Account GROUPINFRA\SA-NO-TeamService failed to create backups using path \\wmsi003714n\sql dump
        [Info @07:15:00.731] "Verify: Grant Backup Plan Permissions\Root\VerifyDummyBackupCreation(VerifyTestBackupCreatedSuccessfully): Exiting Verification with state Completed and result Error"

    Any ideas?
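
    For context: operating system error 5 is "access denied", and the backup file is written by the SQL Server service itself, so it is the service's account (apparently GROUPINFRA\SA-NO-TeamService here) that needs write access to the share, both in the share permissions and in NTFS. A hedged way to reproduce this outside the tool (database name inferred from the .bak filename in the log):

        -- run in SSMS against WMSI003714N\SqlExpress
        BACKUP DATABASE [Tfs_Configuration]
        TO DISK = '\\wmsi003714n\sql dump\test.bak' WITH INIT;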

    Read the article

  • mysql_query missing during installation

    - by Arsenal
    Hi, I'm trying to install the pdo_mysql extension... I managed to install it successfully once, but ever since I upgraded MySQL to 5.1.34 (using rpm packages) it seems to have gone down, so I tried to reinstall it. However, it now crashes on ./configure with a 'mysql_query not found' error:

        configure:3961: checking for mysql_query in -lmysqlclient
        configure:3991: gcc -o conftest -g -O2 -I/usr/local/include/php -Wl,-rpath,/usr/lib/mysql -L/usr/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc conftest.c -lmysqlclient -rdynamic -L/usr/lib/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc >&5
        /usr/bin/ld: skipping incompatible /usr/lib/mysql/libmysqlclient.a when searching for -lmysqlclient
        /usr/bin/ld: skipping incompatible /usr/lib/mysql/libmysqlclient.a when searching for -lmysqlclient
        /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../libmysqlclient.so when searching for -lmysqlclient
        /usr/bin/ld: skipping incompatible /usr/lib/libmysqlclient.so when searching for -lmysqlclient
        /usr/bin/ld: cannot find -lmysqlclient
        collect2: ld returned 1 exit status
        configure:3997: $? = 1
        configure: failed program was:
        | /* confdefs.h. */
        ...

    In that file there seems to be a mysql_query(); statement. I'm pretty sure mysql_query works, however, since all of my websites are running normally. But the current setup is a mess (previous students kind of messed it up) and there are a whole lot of libmysqlclient versions in /etc:

        libmysqlclient.so.10.0.0
        libmysqlclient.so.12.0.0
        libmysqlclient.so.14.0.0
        libmysqlclient.so.15.0.0
        libmysqlclient.so.16.0.0
        libmysqlclient_r.so.10.0.0
        libmysqlclient_r.so.12.0.0
        libmysqlclient_r.so.14.0.0
        libmysqlclient_r.so.15.0.0
        libmysqlclient_r.so.16.0.0

    And just as many symlinks. Does anyone know how to get this right? Many thanks! (Oh, and no, pecl install pdo_mysql doesn't get me any further.) I'm running CentOS 4 with PHP 5.2.9 compiled from source and MySQL 5.1.34.
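
    One hedged reading of that log: "skipping incompatible" from ld usually means an architecture mismatch - here a gcc targeting x86_64-redhat-linux being pointed at 32-bit libmysqlclient copies. A quick check, with the paths being assumptions:

        # is the library 32-bit while gcc targets x86-64?
        file /usr/lib/mysql/libmysqlclient.so
        # 64-bit client libraries usually live here on RHEL/CentOS:
        ls /usr/lib64/mysql/
        # if they exist, point the build at them instead:
        ./configure --with-pdo-mysql=/usr LDFLAGS="-L/usr/lib64/mysql"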

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but that only works if the file has already been loaded. Can I list the loaded format data files somewhere? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if (!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, Update-FormatData had already been called by the time I ran the install script. In the install script, I force-import the module, which fires the module's first statement - and thus Update-FormatData - again, and that second call is what fails.
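
    One defensive pattern, sketched under the assumption that the only failure mode here is the "file skipped because it was already present" error after a forced re-import:

        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml") -ErrorAction Stop
        }
        catch {
            # assumed: the format data was already loaded by a previous
            # import, so this particular failure is safe to ignore
            Write-Verbose "Format data already loaded: $_"
        }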

    Read the article

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use it to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php" and contains this code:

        <?php
        include('connect.php');

        $createDate = mktime(0,0,0,10,25,date("Y"));
        $selectedDate = date('Y-m-d', $createDate); // format matched to the DATE_FORMAT below

        $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate,'%Y-%m-%d') <= ?";
        $update = $mysqli->prepare($sql);
        $active = 0; // bind_param requires variables, not literals
        $update->bind_param("is", $active, $selectedDate);
        $update->execute();
        ?>

    What I want is that when the above date is reached (25th Oct), the script performs the UPDATE statement. But my question is: how do I use cron to do this? The server I am using is the university's server, known as helios. Does cron need to be set up in helios (do I have to call the admin for this), or is it something else which uses cron? I have never used cron before, so can you explain to me how it can be set up for the example above with the server I am using? Thanks

    UPDATE: I think the name of the server is actually helios, but I am not sure; I have a Wikipedia page on this: here. I will read the crontab Wikipedia page and see what I can find, but my question is: is cron already set up - do I just go right ahead and use it, or do I need to set it up first?
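
    For reference, a crontab entry is one line of schedule fields plus a command. A minimal sketch (the php binary path and script path are assumptions; edit your crontab with `crontab -e`, if the admins allow it):

        # min hour day month weekday  command
        0 0 25 10 * /usr/bin/php /path/to/inactivesession.php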

    Read the article

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        *** Warning: inter-library dependencies are not known to be supported.
        *** All declared inter-library dependencies are being dropped.
        *** Warning: libtool could not satisfy all declared inter-library
        *** dependencies of module libphp5. Therefore, libtool will create
        *** a static module, that should work as long as the dlopening
        *** application is linked with the -dlopen flag.
        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!!

    Read the article

  • 4.4.1 Timeout in 10 minute intervals SMTP on batch email jobs

    - by TEEKAY
    I am running a job that uses SMTP and can run in excess of an hour, emailing the entire time. It's not my code but a workflow-based app, so I just get a form to configure the mail server, subject, message, etc., and can't see the implementation. I know it is .NET and SmtpClient. I have been seeing 4.4.1 timeouts every 10 minutes, reported by the application as the response from the server. The number of emails in those 10-minute sessions is variable, between 100 and 150, which leads me to ask about the 10-minute timeout specifically. I have found there are several Exchange properties (though I don't know what version they are running) that set timeout limits (http://technet.microsoft.com/en-us/library/bb232205%28v=exchg.150%29.aspx). Would the values for ConnectionInactivityTimeOut and ConnectionTimeout be controlling these timeouts? And finally: would Exchange consider the connection(s) it keeps receiving from the same source to be one continuous connection, and cut it off every 10 minutes? I am connecting by the mail server's static IP. Thanks if anyone can shed any light on my problem. EDIT - It is my belief that the library is just keeping the connections around and isn't wrapped in any cleanup code or using statement. That said, I still haven't made any progress on this issue in the last year and just requeue the failed ones as I see them.
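
    For what it's worth, the numbers line up suspiciously well: on an Exchange receive connector, ConnectionTimeout (the cap on a single connection's total lifetime) defaults, if I recall correctly, to 10 minutes. A hedged Exchange Management Shell sketch for whoever runs that server (the connector name is an assumption):

        Get-ReceiveConnector | Format-List Name,ConnectionTimeout,ConnectionInactivityTimeout
        # raise the total-lifetime cap on the relevant connector, e.g. to 1 hour
        Set-ReceiveConnector "SERVER\Default SERVER" -ConnectionTimeout 01:00:00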

    Read the article

  • Choice of filesystem for GNU/Linux on an SD card

    - by gspr
    Hi. I have an embedded ARM-based system running off an SD card. It's currently Debian GNU/Linux using ext3 as the filesystem. As I'm about to reinstall the system, I started wondering about changing to a more flash-friendly filesystem. I've heard about JFFS2, YAFFS2 and LogFS, and they all seem suited to the job. Which one would you recommend? Also, I've heard there have been a lot of ext4 improvements to better suit SSDs; am I to interpret that as meaning that running ext4 should be just fine? What do I need to think about in particular in that case? I guess the usage of the system is important. But for the sake of generality, imagine it'll do standard desktop stuff (even though it is in fact a small ARM-based system). Thanks for any replies. Edit: Wikipedia tells me (in a "citation needed" statement) that:

        Removable flash memory cards and USB flash drives have built-in controllers to perform wear leveling and error correction, so use of a specific flash file system does not add any benefit.

    Thus, I'm leaning towards sticking with an ext filesystem.

    Read the article

  • How do I keep a table in Sync across 4 db's to be used in SQL Replication Filtering?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally-connected app that uses merge replication (SQL 2005) to stay in sync, and that part is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), WorkerName (varchar), and ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases (see the sketch below). In order to be part of the filter, I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN
            (SELECT ClientID FROM [dbo].[tblWorkerOwnership]
             WHERE WorkerID = SUSER_SNAME())

    Replication also allows you to chain filters together; this next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it; I inherited it and was then told to make it a "disconnected app."
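
    A hedged sketch of the trigger idea mentioned above, with all object names assumed (and the WorkerID/WorkerName discrepancy left to reconcile): keep the master copy in one database and push every change to the other three:

        CREATE TRIGGER trg_SyncWorkerOwnership
        ON [dbo].[tblWorkerOwnership]
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- brute-force refresh; acceptable for a small mapping table
            DELETE FROM [OtherDb].[dbo].[tblWorkerOwnership];
            INSERT INTO [OtherDb].[dbo].[tblWorkerOwnership] (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID
            FROM [dbo].[tblWorkerOwnership];
            -- repeat for the remaining two databases
        END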

    Read the article

  • Setting up apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache, so please be kind - I'm also aware of this question, but it hasn't been much help to me. I'd like to set up a simple vhost with Apache for my Icinga instance. Icinga is up and running and I can access it from x.x.x.x/icinga; however, I would like to be able to access it externally as well as internally. I have set up the /etc/hosts file, and the following is my barebones vhost in httpd.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    I also have the following in my .htaccess file:

        <Directory>
            Allow From All
            Satisfy Any
        </Directory>

    An entry has been made for the instance in the Windows DNS server on my network, but when I try to access the site by URL I am greeted with an Internal Server Error. Reviewing /var/log/icinga.com-error_log I see the following entry:

        [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here

    Can someone help me spot the error of my ways?
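
    As the log line says, <Directory> sections are not allowed inside .htaccess files - they belong in the server configuration, where they also need a path argument. A hedged sketch of the equivalent block moved into the vhost (Apache 2.2-era syntax):

        <VirtualHost *:80>
            ...
            <Directory /usr/share/icinga>
                Order allow,deny
                Allow from all
                Satisfy Any
            </Directory>
        </VirtualHost>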

    Read the article

  • Any ideas why Ettercap filters aren't seeing packet data?

    - by Bryan
    I'm using an Ettercap filter to detect a query response coming back from a particular service on a remote machine. When I see a response from the service, I search through the data in the packet to see if a certain offset has a specific value, and if so I change the value at another offset. The trouble is, when I try this on a new virtual machine I built, my Ettercap filter is no longer getting any data in the DATA.data variable available to it.

        if(ip.proto == TCP && tcp.src == 17867) {
            msg("Response seen!\n");
            if(DATA.data + 2 == "\0x01") {
                msg("Flag detected!\n");
                DATA.data + 5 = 0x09;
            }
        }

    The filter is getting applied to the traffic, because "Response seen!" messages get printed out by Ettercap. However, "Flag detected!" messages do not. I think DATA.data is indeed empty, because if I change my second "if" statement to check for DATA.data == "" then the "Flag detected!" message gets printed. Any ideas why this may be happening?! Also, if this is the wrong site to be asking questions like this, please let me know. I wasn't sure if it fit better here or somewhere like superuser or serverfault. By the way, this is a cross-post from StackOverflow... I should have posted on this forum instead, I think. :)

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes to continue

    - by sw0x2A
    Our webservers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (doing the INSERT DELAYED) wait for the database, and in some cases Apache's MaxClients limit is reached, so it stops serving requests. We use INSERT DELAYED because:

        The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete.

    (Quote from the MySQL documentation.) I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, and we are thinking about moving this data to a NoSQL database. But for now I would like to know why INSERT DELAYED is not working as expected.)
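
    One hedged explanation worth checking: INSERT DELAYED is only fire-and-forget while the per-table queue of delayed rows has room; once the handler thread is blocked by a table lock and the queue fills up (delayed_queue_size rows), the clients block too. A sketch of what to inspect (the new size is an arbitrary assumption):

        -- inspect the delayed-insert settings and activity
        SHOW VARIABLES LIKE 'delayed_%';
        SHOW STATUS LIKE 'Delayed_%';
        -- a larger queue rides out longer table locks
        SET GLOBAL delayed_queue_size = 100000;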

    Read the article

  • sql server 2005 instance unresponsive and all db's are 'in recovery'

    - by user44650
    we've got a SQL Server 2005 instance that one of our guys messed up - I believe they killed the SQL Server service and restarted the computer - and when it came back, all of our databases were "in recovery", and it times out every time we try to connect to it. It's been 'in recovery' and unable to connect to 'msdb' (also in recovery) whenever we try to use SSMS, for the last 4 days now. I'm unsure how to use the DBCC CHECKDB command to check the db integrity. We have backups (which we can't restore from because it keeps timing out), and it's a testing server, so nothing in production is really lost. Is there any way to get it out of recovery mode? We have another SQL Server instance running that's just fine, but this instance keeps timing out. The errors I keep seeing are:

        Database msdb is being recovered. Wait until recovery is finished.

    and

        An exception occurred while executing a Transact-SQL statement or batch. Timeout expired.

    Any thoughts? We don't really have a DBA here, or anyone with much SQL experience.
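
    As a starting point: recovery progress (including a percent-complete estimate) is written to the SQL Server error log, and DBCC CHECKDB can only run once a database is back online. A hedged sketch (the database name is an assumption):

        -- watch recovery progress in the error log
        EXEC xp_readerrorlog;
        -- once the database is online, check its integrity
        DBCC CHECKDB ('mydatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;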

    Read the article

  • Isn't a hidden volume used when encrypting a drive with TrueCrypt detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it to be nothing short of invaluable for securing data. As relatively well-known free, open-source software, I would have thought that TrueCrypt would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden volume encryption. There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine):

        TrueCrypt first attempts to decrypt the standard volume header using the entered password. If it fails, it loads the area of the volume where a hidden volume header can be stored (i.e. bytes 65536-131071, which contain solely random data when there is no hidden volume within the volume) to RAM and attempts to decrypt it using the entered password. Note that hidden volume headers cannot be identified, as they appear to consist entirely of random data.

    Whilst the hidden headers supposedly "cannot be identified", is it not possible, on encountering a volume encrypted using TrueCrypt, to determine at which offset the header was successfully decrypted, and from that determine whether you have decrypted the header for a standard volume or a hidden volume? That seems like a fundamental flaw in the header decryption implementation, if I'm reading this right - or am I reading it wrong?

    Read the article

  • Receiving a 500 Internal Server error after login success?

    - by Jeremy Quick
    I created my first member login form, which takes the typical username and password and then sends them to the code below, checklogin.php:

        mysql_connect($db_host, $db_username, $db_password) or die(mysql_error());
        mysql_select_db($db_database) or die(mysql_error());

        $username = $_POST['username'];
        $password = $_POST['password'];
        $username = stripslashes($username);
        $password = stripslashes($password);
        $username = mysql_real_escape_string($username);
        $password = mysql_real_escape_string($password);

        $sql = "SELECT * FROM users WHERE username='$username' and password='$password'";
        $result = mysql_query($sql);
        $count = mysql_num_rows($result);

        if($count == 1) // ERROR APPEARS TO TAKE PLACE HERE
        {
            session_start();
            $_SESSION['username'] = $username;
            $_SESSION['password'] = $password;
            header('login_success.php');
        }
        else
        {
            header("location:login_fail.php");
        }

    If I type in the wrong information, everything works properly, so I know the error appears to take effect in the marked if statement. I have been searching the internet for solutions, but none seem to match my situation, or I am overlooking them. I've made a few changes which brought me to this point; before, I was receiving deprecation warnings. Also, I have checked the logs and they are empty of errors relating to this.
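
    One detail that stands out in the success branch: header('login_success.php') sends no "Location:" prefix, unlike the failure branch, so no redirect is issued. A corrected sketch of that branch, for comparison:

        session_start();
        $_SESSION['username'] = $username;
        header("Location: login_success.php");
        exit; // stop the script once the redirect has been issued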

    Read the article

  • Unix sort 10x slower with keys specified

    - by KenFar
    My data: a 71 MB file with 1.5 million rows. It has 6 fields, four of which are strings of avg. 15 characters; two are integers. Three of the fields are sometimes empty. All six fields combine to form a unique key - and that's what I need to sort on. Sort statement:

        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv

    The problem: if I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30-second timing is fine, but the 660 is a killer. I could theoretically move the temp directory to an SSD, and/or split the file into 4 parts, sort them separately (in parallel), then merge the results, etc. But I'm hoping for something simpler, since these results are so bad as-is. Any suggestions?
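
    One cheap thing to try before restructuring anything: keyed comparisons in sort are locale-aware, and locale collation (especially UTF-8) is far slower than plain byte comparison. Forcing the C locale often closes most of this gap:

        # byte-wise collation instead of the locale's collation rules
        LC_ALL=C sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv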

    Read the article

  • Cisco AnyConnect VPN client - prevent connecting as work network

    - by Opmet
    From Windows 7, I'm using "Cisco AnyConnect Secure Mobility Client 3.0" to connect to our corporate network. Every time I establish the VPN connection, Windows sets the network type to "work network". I don't want this, so I go to the Network and Sharing Center and manually change it to "public network" - but I have to repeat this for every new VPN connection. Is there any way to make Windows remember/persist this configuration? Can it be configured in the VPN client? Do our IT admins need to change something at the server end? Motivation: a "work network" by default uses firewall settings that allow things like network discovery and file shares, but I just need Remote Desktop (mstsc). Additional info: our IT admins claimed this is default Windows behaviour and there is nothing we can do about it: Windows will always classify a VPN connection as a "work network". Based on this statement, I assume this is a "general" issue and went ahead posting here (at superuser.com).

    Read the article

  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one. This was so that I could control which tab was displayed to the user when it was opened, so that they were presented with the most appropriate one for that document's progress through the workflow. That part of it works! One of the tabs features a radio button that controls the visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first:

        view := @If(PeerReview = "Team"; "GroupNames"; "GroupMembers");
        @Unique(@DbColumn(""; ""; view; 1))

    The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this: if the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though. The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me, and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?

    Read the article

  • Can not open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote MySQL connections) on my Ubuntu 12.04 server machine, but for the life of me can't get the damned thing to work! Here is what I did:

    1) List current firewall rules:

        $> sudo iptables -nL -v

    output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
          225 16984 fail2ban-ssh  tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    multiport dports 22
          220 69605 ACCEPT        all  --  lo   *    0.0.0.0/0   0.0.0.0/0
            0     0 REJECT        all  --  lo   *    0.0.0.0/0   127.0.0.0/8  reject-with icmp-port-unreachable
          486 54824 ACCEPT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT        icmp --  *    *    0.0.0.0/0   0.0.0.0/0    icmptype 8
            4   208 LOG           all  --  *    *    0.0.0.0/0   0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            4   208 REJECT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
            0     0 REJECT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
          735  182K ACCEPT        all  --  *    *    0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target        prot opt in   out  source      destination
          225 16984 RETURN        all  --  *    *    0.0.0.0/0   0.0.0.0/0

    2) Try to connect from a remote machine:

        $> mysql -u root -p -h x.x.x.x

    output: timeout... failed to connect

    3) Try to add a new rule to iptables:

        iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

    4) Make sure the new rule is added:

        $> sudo iptables -nL -v

    output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
          359 25972 fail2ban-ssh  tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    multiport dports 22
          251 78665 ACCEPT        all  --  lo   *    0.0.0.0/0   0.0.0.0/0
            0     0 REJECT        all  --  lo   *    0.0.0.0/0   127.0.0.0/8  reject-with icmp-port-unreachable
          628 64420 ACCEPT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT        icmp --  *    *    0.0.0.0/0   0.0.0.0/0    icmptype 8
            5   260 LOG           all  --  *    *    0.0.0.0/0   0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            5   260 REJECT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable
            0     0 ACCEPT        tcp  --  eth0 *    0.0.0.0/0   0.0.0.0/0    tcp dpt:3306

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
            0     0 REJECT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
          919  213K ACCEPT        all  --  *    *    0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target        prot opt in   out  source      destination
          359 25972 RETURN        all  --  *    *    0.0.0.0/0   0.0.0.0/0

    which appears to be the case (last line in the "Chain INPUT" section).

    5) Try to connect again from the remote machine:

        $> mysql -u root -p -h x.x.x.x

    output: timeout... failed to connect - failing again.

    6) Try to flush all rules:

        $> sudo iptables -F

    7) This time I CAN CONNECT.

    8) Reboot the server and try to connect: FAILURE.
    I suspect that since the new rule is appended at the end, it has no effect, as there is a "reject all" rule before it. If this is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
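
    A hedged sketch of the likely fix: insert the ACCEPT rule above the final LOG/REJECT pair instead of appending below it (the position number is an assumption - count the INPUT rules on your own box first), and persist the ruleset so it survives the reboot in step 8:

        # insert at position 9, i.e. just before the LOG/REJECT pair
        sudo iptables -I INPUT 9 -p tcp -m tcp --dport 3306 -j ACCEPT
        # save the rules so they are not lost on reboot
        sudo sh -c 'iptables-save > /etc/iptables.rules'
        # restore them at boot, e.g. from /etc/rc.local or an if-up script:
        # iptables-restore < /etc/iptables.rules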

    Read the article

  • Mysql: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows:

        CREATE TABLE text_page (
            text VARCHAR(255),
            page_id INT UNSIGNED
        ) ENGINE=MYISAM DEFAULT CHARSET=ascii

    The requirement is to create an index over the column text. The table size is about 34 GB. I tried to create the index with the following statement:

        ALTER TABLE text_page ADD KEY ix_text (text)

    After 10 hours of waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to be updated, inserted into, or deleted from. The reason to create an index on the column text is that this kind of SQL query will be frequently executed:

        SELECT page_id FROM text_page WHERE text = ?

    UPDATE: I have solved the problem by partitioning the table. The table was partitioned into 40 pieces on column text. Creating the index on the table then took about 1 hour to complete. It seems that MySQL index creation becomes very slow when the table size gets very big, and partitioning reduces the table into smaller chunks.
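
    A hedged sketch of the partitioning approach described in the update (MySQL 5.1+ syntax; 40 is the figure quoted above, and works here because the table has no unique keys to constrain the partitioning column):

        -- hash the text column into 40 partitions, then build the index
        ALTER TABLE text_page
            PARTITION BY KEY (text)
            PARTITIONS 40;
        ALTER TABLE text_page ADD KEY ix_text (text);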

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it, I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data - e.g. gzipping files of c. 1 GB+, UPDATEs on large MySQL tables, etc. - has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which has a lower spec. That took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB; now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
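
    A hedged starting point for the diagnosis: while one of the slow jobs is running, watch whether the stall is I/O-bound or CPU-bound (iostat is in the sysstat package):

        iostat -x 5    # high await/%util on one disk points at a failing or
                       # misconfigured drive
        vmstat 5       # si/so columns show swapping; wa shows I/O wait
        top            # look for encryption overhead (e.g. ecryptfs/kworker
                       # threads) eating a core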

    Read the article

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load, the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad-core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor configuration was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server, which had four cores presented to it.

    Read the article

  • EXCEL 2007 macro

    - by Binay
    I have a macro which connects to a db, fetches data for me, and makes it comma-separated. The problem is that a comma is also appended to the last row, which I don't want. I'm struggling here; could you please help out? Here is the relevant part of the code:

        If cn.State = adStateOpen Then
            'Issue SQL statement
            Rec_set.Open "SELECT concat(trim(Columns_0.ColumnName), ' ','(', 'varchar(2000)' ,')') columnname " & _
                "FROM DBC.Columns Columns_0 " & _
                "WHERE (Columns_0.TableName= " & Chr(39) & Tablename & Chr(39) & _
                " and Columns_0.Databasename=" & Chr(39) & db & Chr(39) & ") " & _
                "ORDER BY Columns_0.Columnid;", cn
            If Not Rec_set.BOF And Not Rec_set.EOF Then
                Do Until Rec_set.EOF
                    For i = 0 To Rec_set.Fields.Count - 1
                        strString = strString & Rec_set(i) & ","
                    Next
                    strFile.WriteLine (strString)
                    strString = ""
                    Rec_set.MoveNext
                Loop

    Here is the result I am getting:

        EMPNO (varchar(2000)),
        ENAME (varchar(2000)),
        JOB (varchar(2000)),
        MGR (varchar(2000)),
        HIREDATE (varchar(2000)),
        SAL (varchar(2000)),
        COMM (varchar(2000)),
        DEPTNO (varchar(2000)),

    I don't want the last comma.
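
    A sketch of one common fix, assuming the query returns a single column as above and that strFile is an FSO TextStream: build the whole list first, then strip the final comma before writing:

        Dim strAll As String
        Do Until Rec_set.EOF
            strAll = strAll & Rec_set(0) & "," & vbCrLf
            Rec_set.MoveNext
        Loop
        ' drop everything from the last comma onward (the trailing "," & vbCrLf)
        If Len(strAll) > 0 Then strAll = Left(strAll, InStrRev(strAll, ",") - 1)
        strFile.Write strAll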

    Read the article
