Search Results

Search found 43235 results on 1730 pages for 'database connection'.

Page 119/1730 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas…they don't…officially anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement: IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool' If that statement returns true, it fails. (The print statements are just placeholders.) Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which supports everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or with different names. You can create them using the MKLINK command in a command prompt: mklink /D D:\SkyDrive\Data D:\Data mklink /D D:\SkyDrive\Log D:\Log This creates a symbolic link from my data and log folders to my SkyDrive folder. Any file saved in either location will instantly appear in the other. And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth, of course). So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link. Although it doesn't actually use SkyDrive, it performs the same function. So in effect, the following statement: ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022' is turned into: mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$" The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality. 
To wrap this up, all you have to do is the following: Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples). On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log). Voila! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • User connection management in Reporting Services configuration

    - by Testas
    IT professionals will use Reporting Services Configuration Manager to perform post-installation tasks for SQL Server Reporting Services. Introduced in SQL Server 2005, Reporting Services Configuration Manager provides an intuitive interface to perform tasks including specifying the report server database and the Report Manager URL; indeed, one of the first post-installation tasks that should be performed is backing up the encryption keys that are used to protect the sensitive information within the RDL files.  Many of the options that are selected within Reporting Services Configuration Manager are written to a number of configuration files, including the rsreportserver.config file located in the C:\Program Files\Microsoft SQL Server\Report Server InstanceName\Reporting Services\ReportServer folder. When opening this file you will notice that there are more configuration settings within the rsreportserver.config file than are available through the Reporting Services Configuration Manager interface. As a result there are additional configuration options that can be defined within this file.  A customer was having a problem performing stress tests against a new Report Server that would be going live for an enterprise reporting system. One aspect of the stress test was to fire 50 connections from a single user account. When performing the stress test, an error reported that the maximum active request count had been exceeded. Within the rsreportserver.config, there is a key that is added to the file:  <Add Key="MaxActiveReqForOneUser" Value="20"/>  Changing the value from 20 to 50 accommodated the needs of the stress test; however, a wider question should be asked pertaining to this setting when implementing Reporting Services in a production environment. Within an intranet environment, the default setting is appropriate when network bandwidth is high, users are known and demand for reports is particularly high from a group of users.  However, when deploying a Reporting Services solution to an extranet, or the internet, you may want to consider reducing this setting to reduce the scope of connections that can be acquired by a single user and avoid placing unnecessary pressure on the report server. I do hope that Reporting Services Configuration Manager evolves to include an advanced page that provides an intuitive interface to change configuration settings such as MaxActiveReqForOneUser, and also to configure rendering and data extensions and define secure connection levels to the report server. All these options can be configured within the rsreportserver.config file, and these are settings that customers would like to see in Reporting Services Configuration Manager in the future.   If you think that the SQL community would benefit from this addition, you can vote on it at Microsoft Connect:  https://connect.microsoft.com/SQLServer/feedback/details/565575/extending-reporting-services-configuration-manager-rscm    

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time we all have. Since we have little time, we need to be selective about what we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with SQL features and transactions. As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), available only in warehousing databases, to the transactional database. The reason I am even more intrigued to learn about ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to: Scale by simply adding nodes to the cluster with a single command; Run billions of transactions a day; Run fast real-time analytics; Achieve high availability with recovery from node failure; Let the database manage itself; Easily migrate from MySQL, as it is nearly plug-and-play compatible: use MySQL drivers, tools and replication. While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them. Twoo.com – Europe’s largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster. EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds. NoMoreRack – Top 2 fastest-growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud. MakeMyTrip – India’s leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore. Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about. 
Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future, when we discuss its technical aspects, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • Client-Server connection response timeout issues

    - by Srikar
    A user creates a folder in the client, and in the client-side code I hit an API on the server to make this persistent for that user. But in some cases, my server is so busy that the request times out. The server has executed my request but timed out before sending a response back to the client. The timeout set in the client is 10 seconds. At this point the client thinks that the server has not executed its request (of creating a folder) and ends up sending it again. Now I have 2 folders on the server but the user has created only 1 folder in the client. How to prevent this? One of the ways to solve this is to use a unique ID with each new request, so the ID acts as a distinguisher between old and new requests from the client. But this leads to storing these IDs on my server and doing a lookup for each API call, which I want to avoid. The other way is to increase the timeout duration, but I don't want to change this from 10 seconds. Something tells me that there are better solutions. I have posted this question on Stack Overflow but I think it is better suited here. UPDATE: I will make my problem even more explicit. The client is a web browser and the server is running nginx+django+mysql (standard stack). The user creates a folder in the web browser. As a result I need to hit a server API. The API call responds, so the client knows the API call was a success. This is the normal scenario. Sometimes, though, the server successfully completes the API request but the client-side (web browser) connection times out before the server can respond. The client has no clue at this point. The user thinks the request failed and clicks again. This time it is a success, but when the UI refreshes he sees 2 folders. I want to remedy this situation.
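
    A minimal MySQL sketch of the unique-request-ID idea mentioned above (all table, column and value names are hypothetical, not from the original post). The browser generates the ID once per user action and resends the same value on a retry; the unique key turns the duplicate INSERT into a no-op, so the "lookup" is just an index check rather than extra application code:

        CREATE TABLE folders (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            user_id    INT          NOT NULL,
            name       VARCHAR(255) NOT NULL,
            request_id CHAR(36)     NOT NULL,   -- e.g. a UUID generated in the browser
            UNIQUE KEY uq_folders_request (user_id, request_id)
        ) ENGINE=InnoDB;

        -- The first attempt and any retry send the same request_id; only one row is created.
        INSERT INTO folders (user_id, name, request_id)
        VALUES (42, 'Holiday photos', '0e37df36-f698-11e6-8dd4-cb9ced3df976')
        ON DUPLICATE KEY UPDATE id = id;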

    Read the article

  • i want to access mysql database table on given conditions in drop down menu [on hold]

    - by user3909877
    The code below is accessing the database table directly, but I want it to display the table contents only for the conditions chosen in the drop-down menus: for example, when I select Islamabad in one drop-down menu and Lahore in the other, as given in the code, and press the search button, it should then display the flights table. But it is displaying it directly. <p class="h2">Quick Search</p> <div class="sb2_opts"> <p> </p> <form method="post" action=""> <p>Enter your source and destination.</p> <p> From:</p> <select name="from"> <option value="Islamabad">Islamabad</option> <option value="Lahore">Lahore</option> <option value="murree">Murree</option> <option value="Muzaffarabad">Muzaffarabad</option> </select> <p> To:</p> <select name="To"> <option value="Islamabad">Islamabad</option> <option value="Lahore">Lahore</option> <option value="murree">Murree</option> <option value="Muzaffarabad">Muzaffarabad</option> </select> <input type="submit" value="search" /> </form> </form> </table> <?php $from = isset($_POST['from'])?$_POST['from']:''; $to = isset($_POST['to'])?$_POST['to']:''; if( $from =='Islamabad'){ if($to == 'Lahore'){ $db_host = 'localhost'; $db_user = 'root'; $database = 'homedb'; $table = 'flights'; if (!mysql_connect($db_host, $db_user)) die("Can't connect to database"); if (!mysql_select_db($database)) die("Can't select database"); $result = mysql_query("SELECT * FROM {$table}"); if (!$result) { die("Query to show fields from table failed"); } $result = mysql_query("SELECT * FROM {$table}"); if (!$result) { die("Query to show fields from table failed"); } $fields_num = mysql_num_fields($result); echo "<h1>Table: {$table}</h1>"; echo "<table border='1'><tr>"; while($row = mysql_fetch_row($result)) { echo "<tr>"; // $row is array... foreach( .. ) puts every element // of $row to $cell variable foreach($row as $cell) echo "<td>$cell</td>"; echo "</tr>\n"; } } } mysqli_close($con); ?>

    Read the article

  • Mongodb vs. Cassandra

    - by ming yeow
    I am evaluating what might be the best migration option. Currently, I am on a sharded MySQL setup (horizontal partitioning), with most of my data stored in JSON blobs. I do not have any complex SQL queries (I already migrated away from them once I partitioned my db). Right now, it seems like both MongoDB and Cassandra would be likely options. My situation: lots of reads in every query, less regular writes; not worried about "massive" scalability; more concerned about simple setup, maintenance and code; minimize hardware/server cost.

    Read the article

  • Postgres user drop

    - by Grasper
    I am trying to drop a user: drop user testUser; I want to force this to work in a simple manner (Not a million calls)... How can I do this easily? I get this output: ERROR: role "testUser" cannot be dropped because some objects depend on it DETAIL: access to table main.tap_db_version access to table main.user_instance access to table main.target_type access to table main.status_code access to table main.state_space_profile access to table main.service_subscription access to table main.service_instance access to table main.sa_ordnance_weapon_type access to table main.operation access to table main.mission_class access to table main.map_symbol access to table main.ada_weapon_type access to table main.active_process access to table main.acft_type_00_only access to table main.abp_create_params access to table main.exercise access to table main.decl access to table main.data_set access to table main.cancellation_notice access to table main.ato_family_tree access to table main.apportionment_cat_cd access to table main.abp access to table main.alert_settings access to table main.alert_log access to table main.airspace_usage_category access to schema main access to view testUser.top_priority access to view testUser.target_ssm_msn_count access to view testUser.target_air_msn_count access to view testUser.sortie_sum access to view testUser.ref_info access to view testUser.preview_rmk_count access to view testUser.preview_pgm_las_count access to view testUser.preview_pgm_desi_count access to view testUser.preview_objective_count access to view testUser.preview_gfriend_count access to view testUser.preview_escort_msn_req access to view testUser.preview_chaff_data access to view testUser.preview_airmove_seg access to view testUser.preview_aircraft_total access to view testUser.offload_total access to view testUser.objective_count access to view testUser.fuel_planned access to view testUser.ew_data access to view testUser.dual access to view testUser.current_base_inventory access to view testUser.cell_total access to view testUser.asgn_sortie_sum access to view testUser.appor_sorties_planned access to view testUser.airmove_seg access to view testUser.aircraft_total access to view testUser.abp access to table testUser.req_msn_task access to table testUser.req_task_source_req access to table testUser.req_ssm_msn access to table testUser.req_ssm_source access to table testUser.req_msn access to table testUser.req_msn_warnings access to table testUser.req_air_msn access to table testUser.req_src_header access to table testUser.req_msn_ids access to table testUser.req_msn_comment access to table testUser.req_c2_msn access to table testUser.req_c2_source access to table testUser.req_ada_msn access to table testUser.req_ada_vertex access to table testUser.weather_forecast access to table testUser.weather_coords access to table testUser.weather_area access to table testUser.weapon_option access to table testUser.wag_activity access to table testUser.unit_remark access to table testUser.unit_location_turn access to table testUser.unit_iff access to table testUser.unit_coordination access to table testUser.unit_code access to table testUser.trace_point access to table testUser.tasking_agency access to table testUser.task_unit access to table testUser.target_type access to table testUser.tap_db_version access to table testUser.status_code access to table testUser.state_space_threat access to table testUser.state_space_profile access to table testUser.state_space access to table testUser.ssm_mission access to 
table testUser.spins_section_id access to table testUser.spins_codes access to table testUser.spins access to table testUser.unit_location access to table testUser.ship_target_request access to table testUser.service_subscription access to table testUser.service_instance access to table testUser.sa_ordnance_weapon_type access to table testUser.runway access to table testUser.restricted_codes access to table testUser.response_entity access to table testUser.residual_mission access to table testUser.request_objective access to table testUser.request and 194 other objects (see server log for list)
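
    One way to clear these dependencies (a sketch, not part of the original post) is to hand the role's objects to another role and revoke its remaining privileges before dropping it. Run it as a superuser in every database that contains objects owned by, or granted to, the role; the quotes matter because the role name is mixed-case:

        REASSIGN OWNED BY "testUser" TO postgres;  -- transfer ownership of tables, views, schemas
        DROP OWNED BY "testUser";                  -- drop leftovers and revoke granted privileges
        DROP USER "testUser";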

    Read the article

  • Objective-C SSL Synchronous Connection

    - by Mike
    Hello, I'm a little new to Objective-C but have run across a problem that I can't solve, mostly because I'm not sure I am implementing the solution correctly. I am trying to connect using a synchronous connection to an https site with a self-signed certificate. I am getting the Error Domain=NSURLErrorDomain Code=-1202 "untrusted server certificate" error that I have seen some solutions to on this forum. The solution I found was to add: - (BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace { return [protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]; } - (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge { [challenge.sender useCredential:[NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust] forAuthenticationChallenge:challenge]; } to the NSURLDelegate to accept all certificates. When I connect to the site using just a: NSURLRequest *theRequest=[NSURLRequest requestWithURL:[NSURL URLWithString:@"https://examplesite.com/"] cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0]; NSURLConnection *theConnection=[[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; It works fine and I see the challenge being accepted. However, when I try to connect using the synchronous connection I still get the error, and I don't see the challenge functions being called when I put in logging. How can I get the synchronous connection to use the challenge methods? Is it something to do with the delegate:self part of the URLConnection? I also have logging for sending/receiving data within the NSURLDelegate that is called by my connection function but not by the synchronous function. What I am using for the synchronous part: NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL: [NSURL URLWithString:@"https://examplesite.com/"]]; [request setHTTPMethod: @"POST"]; [request setHTTPBody: [[NSString stringWithString:@"username=mike"] dataUsingEncoding: NSUTF8StringEncoding]]; dataReply = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; NSLog(@"%@", error); stringReply = [[NSString alloc] initWithData:dataReply encoding:NSUTF8StringEncoding]; NSLog(@"%@", stringReply); [stringReply release]; NSLog(@"Done"); Like I mentioned, I'm a little new to Objective-C so be kind :) Thanks for any help. Mike

    Read the article

  • How to connect a Bluetooth network connection using the command line

    - by edg
    I can enable a Local Area Network interface for my machine with the command netsh interface set interface "Local Area Connection" ENABLED Is there an equivalent command to connect a bluetooth network connection? I've tried netsh interface set interface "Bluetooth" ENABLED but it seems to have no effect, the connection remains disconnected. I also tried netsh interface set interface "Bluetooth" connect=CONNECTED but this returns One or more essential parameters not specified

    Read the article

  • UnboundLocalError: local variable 'rows' referenced before assignment

    - by patrick
    I'm trying to make a database connection from another script, but the script doesn't work properly: if I do a 'print' on the rows I get the value 'null'. But if I use a 'select * from incidents' query, then I get the result from the table incidents. import database rows = database.database("INSERT INTO incidents VALUES(3 ,'test_title1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)") #print database.database() print rows database.py script: import psycopg2 import sys import logfile def database(query): logfile.log(20, 'database.py', 'Executing...') con = None try: con = psycopg2.connect(database='incidents', user='ipfit5', password='tester') cur = con.cursor() #print query cur.execute(query) rows = cur.fetchall() con.commit() #test row does work #cur.execute("INSERT INTO incidents VALUES(3 ,'test_titel1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)") except: logfile.log(40, 'database.py', 'Er is iets mis gegaan') logfile.log(40, 'database.py', str(sys.exc_info())) finally: if con: con.close() return rows

    Read the article

  • Windows XP/7: custom routing for VPN connection

    - by Peter Becker
    We are dealing with a badly configured VPN connection from a vendor, which set up the default gateway but doesn't route traffic anywhere beyond their VPN zone. I managed to do some ad-hoc routing to configure a computer in a way that it can reach the vendor's VPN, our local network as well as the internet. I then tried to turn this into a script, but that failed since the interface number of the VPN changes on every connection. Is there a way in Windows XP and/or Windows 7 to configure custom routing on the client side of a VPN connection? What I would like to do is to have a script running just after the connection comes up that changes the routing table (similar to an ifup script on UNIX).

    Read the article

  • DSL connection dropping when starting game

    - by bootoo
    New connection set up last week - have called tech support, signal fine, sending out tech guy tomorrow - however I can't make sense of this. Using Vista, 2Wire modem, 6Mb AT&T connection, wireless; I can browse websites, stream video from my PC wirelessly to my Xbox, watch Hulu in high res - the connection seems fine. I load up World of Warcraft, I select Enter World, and without fail my 2Wire modem's 'DSL' light goes red, my internet light goes out, I'm no longer connected; then after 10 or 15 seconds it works again, but the game has kicked me. Noticed the same thing on opening or closing uTorrent as well. No other phones/devices plugged into the phone jacks in the apartment - why would joining a game cause my DSL to drop? I can only guess it throws a bunch of information to the server when I hit 'Enter World' and that causes an issue that shuts my connection down, or my router hates me - all help appreciated.

    Read the article

  • Is it possible for double-escaping to cause harm to the DB?

    - by waiwai933
    If I accidentally double escape a string, can the DB be harmed? For the purposes of this question, let's say I'm not using parametrized queries For example, let's say I get the following input: bob's bike And I escape that: bob\'s bike But my code is horrible, and escapes it again: bob\\\'s bike Now, if I insert that into a DB, the value in the DB will be bob\'s bike Which, while is not what I want, won't harm the DB. Is it possible for any input that's double escaped to do something malicious to the DB assuming that I take all other necessary security precautions?
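
    A small illustration of the point, using MySQL-style escaping and a hypothetical table: the double-escaped literal stores a stray backslash, which is wrong data but not dangerous by itself.

        INSERT INTO bikes (owner) VALUES ('bob\'s bike');     -- stored value: bob's bike
        INSERT INTO bikes (owner) VALUES ('bob\\\'s bike');   -- stored value: bob\'s bike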

    Read the article

  • ERD Design help needed

    - by Mobi
    Hello guys, I am new to ERD and related concepts. Earlier I was drawing an ERD that caused me some problems. The names of the two entities in focus are "Bus" and "Passenger". What should the relationship between them be? I think it should be many-to-many, since one passenger can travel in many buses and a bus can give rides to many passengers. But one of my friends insisted that it is a one-to-many relationship (a bus can have many passengers, but a passenger can travel in only one bus). Please let me know what's right. Also, what is the relationship between a class and students? Any help is appreciated.
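
    For the many-to-many reading, the usual resolution in a physical design is a junction (associative) table between the two entities; a sketch with illustrative names:

        CREATE TABLE bus (
            bus_id   INT PRIMARY KEY,
            plate_no VARCHAR(20) NOT NULL
        );

        CREATE TABLE passenger (
            passenger_id INT PRIMARY KEY,
            full_name    VARCHAR(100) NOT NULL
        );

        -- One row per ride: a passenger can appear with many buses and a bus with many passengers.
        CREATE TABLE bus_ride (
            bus_id       INT  NOT NULL REFERENCES bus (bus_id),
            passenger_id INT  NOT NULL REFERENCES passenger (passenger_id),
            ride_date    DATE NOT NULL,
            PRIMARY KEY (bus_id, passenger_id, ride_date)
        );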

    Read the article

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like to have high availability, so I need a multi-master cluster, where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication), and then scale out to more nodes as needed without a reinstallation or downtime. I would like to have a DBMS that is easy to maintain and manage. It should be easy to add nodes, remove nodes, take live backups and monitor the use of resources. It doesn't have to be a relational database system, so a NoSQL system is okay. And I would like to have a free version so I can test it at small scale and compare it with alternatives. What alternatives do I have?

    Read the article

  • Will a database server perform better running on 2 CPUs with 16 cores or 4 CPUs with 8 cores?

    - by AlexOdin
    What I have: an online financial application (ASP.NET, C#) at peak we have 5K+ simultaneous users backend is running on Oracle 11g (active server + stand-by using Active Data Guard). At peak - 4K-5K database sessions Oracle is installed on Linux 5.8 (Oracle's unbreakable version) the database size: 7TB disk storage: NetApp (connected with 10GB network) I would like to replace old servers (IT will purchase HP blades BL685C). Servers will have 256GB of RAM. I need your help to figure out what to do with CPUs and cores. Options: 2 CPUs (2.3 GHz) with 16 cores each 4 CPUs (3.0 GHz) with 8 cores each Question: Which one should I pick? P.S. Next year, we will migrate from Oracle to SQL server. I hope, whatever option you recommend will work for both platforms

    Read the article

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple datamodel that represents a tree structure: The RootEntity is the root of such a tree, it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity again can contain children of type ContainerEntity and of type AtomEntity but can not contain children of type RootEntity. Children are referenced in a well known order. The DB model for this is below. My problem now is that when I delete a RootEntity I want all children to be deleted recursively. I have create foreign key with CASCADE DELETE and two delete triggers for this. But it is not deleting everything, it always leaves some items in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables. Seemling beginning with the recursionlevel of 3. CREATE TABLE RootEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE ContainerEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE AtomEntity ( Id UNIQUEIDENTIFIER NOT NULL, Name VARCHAR(500) NOT NULL, CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id), ); CREATE TABLE RootEntity_Children ( ParentId UNIQUEIDENTIFIER NOT NULL, OrderIndex INT NOT NULL, ChildContainerEntityId UNIQUEIDENTIFIER NULL, ChildAtomEntityId UNIQUEIDENTIFIER NULL, ChildIsContainerEntity BIT NOT NULL, CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex), -- foreign key to parent RootEntity CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId) REFERENCES RootEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) ContainerEntity CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) AtomEntity CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId) REFERENCES AtomEntity (Id) ON DELETE CASCADE, ); CREATE TABLE ContainerEntity_Children ( ParentId UNIQUEIDENTIFIER NOT NULL, OrderIndex INT NOT NULL, ChildContainerEntityId UNIQUEIDENTIFIER NULL, ChildAtomEntityId UNIQUEIDENTIFIER NULL, ChildIsContainerEntity BIT NOT NULL, CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex), -- foreign key to parent ContainerEntity CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) ContainerEntity CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId) REFERENCES ContainerEntity (Id) ON DELETE CASCADE, -- foreign key to referenced (child) AtomEntity CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId) REFERENCES AtomEntity (Id) ON DELETE CASCADE, ); CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted) DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted) GO CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted) DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted) GO

    Read the article

  • Best practice stock management when payment of customer failed using SQL Server and ASP.NET

    - by Martijn B
    Hi there, I am currently building a webshop of my own, where I want to increment the product stock when the customer fails to complete payment within 10 minutes of placing the order. I want to gather information from this thread to make a design decision. I am using SQL Server 2008 and ASP.NET 3.5. Should I use a SQL Server job that checks the unpaid orders at intervals, or are there better solutions for this? Thanks in advance! Martijn
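
    A hedged T-SQL sketch of what such an interval job (for example, a SQL Server Agent job running every minute) could do; all table and column names here are hypothetical, not from the original post:

        BEGIN TRANSACTION;

        -- Remember which pending orders just expired (older than 10 minutes, still unpaid).
        DECLARE @expired TABLE (OrderId INT PRIMARY KEY);

        UPDATE Orders
        SET    PaymentStatus = 'Expired'
        OUTPUT inserted.OrderId INTO @expired
        WHERE  PaymentStatus = 'Pending'
          AND  CreatedAt < DATEADD(MINUTE, -10, GETDATE());

        -- Give the reserved quantities back to stock.
        UPDATE p
        SET    p.Stock = p.Stock + ol.Quantity
        FROM   Products p
        JOIN   OrderLines ol ON ol.ProductId = p.ProductId
        JOIN   @expired e    ON e.OrderId    = ol.OrderId;

        COMMIT;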

    Read the article

  • Database server: Small quick RAM or large slow RAM?

    - by Josh Smeaton
    We are currently designing our new database servers, and have come up with a trade off I'm not entirely sure of how to answer. These are our options: 48GB 1333MHz, or 96GB 1066MHz. My thinking is that RAM should be plentiful for a Database Server (we have plenty and plenty of data, and some very large queries) rather than as quick as it could be. Apparently we can't get 16GB chips at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less faster RAM? Extra Info: Number of DIMM Slots Available: 6 Servers: Dell Blades CPU: 6 core (only single socket due to Oracle licensing).

    Read the article

  • SSH: Connection Reset by Peer

    - by hopeless
    I have a Solaris 10 server on another network. I can ping it and telnet to it, but ssh doesn't connect. PuTTY log contains nothing of interest (they both negotiate to ssh v2) and then I get "Event Log: Network error: Software caused connection abort". ssh is defintely running: svcs -a | grep ssh online 12:12:04 svc:/network/ssh:default Here's an extract from the server's /var/adm/messages (anonymised) Jun 8 19:51:05 ******* sshd[26391]: [ID 800047 auth.crit] fatal: Read from socket failed: Connection reset by peer However, if I telnet to the box, I can login to ssh locally. I can also ssh to other (non-Solaris) machines on that network fine so I don't believe that it's a network issue (though, since I'm a few hundred miles away, I can't be sure). The server's firewall is disabled, so that shouldn't be a problem root@******** # svcs -a | grep -i ipf disabled Apr_27 svc:/network/ipfilter:default Any ideas what I should start checking? Update: Based on the feedback below, I've run sshd in debug mode. Here's the client output: $ ssh -vvv root@machine -p 32222 OpenSSH_5.0p1, OpenSSL 0.9.8h 28 May 2008 debug2: ssh_connect: needpriv 0 debug1: Connecting to machine [X.X.X.X] port 32222. debug1: Connection established. debug1: identity file /home/lawrencj/.ssh/identity type -1 debug1: identity file /home/lawrencj/.ssh/id_rsa type -1 debug1: identity file /home/lawrencj/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version Sun_SSH_1.1 debug1: no match: Sun_SSH_1.1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.0 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer And here's the server output: root@machine # /usr/lib/ssh/sshd -d -p 32222 debug1: sshd version Sun_SSH_1.1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: Bind to port 32222 on ::. Server listening on :: port 32222. debug1: Bind to port 32222 on 0.0.0.0. Server listening on 0.0.0.0 port 32222. debug1: Server will not fork when running in debugging mode. Connection from 1.2.3.4 port 2652 debug1: Client protocol version 2.0; client software version OpenSSH_5.0 debug1: match: OpenSSH_5.0 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-Sun_SSH_1.1 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: Failed to acquire GSS-API credentials for any mechanisms (No credentials were supplied, or the credentials were unavailable or inaccessible Unknown code 0 ) debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer debug1: Calling cleanup 0x4584c(0x0) This line seems a likely candidate: debug1: Failed to acquire GSS-API credentials for any mechanisms (No credentials were supplied, or the credentials were unavailable or inaccessible

    Read the article

  • visual studio 2010 database project, is there a visual way?

    - by b0x0rz
    I started a Visual Studio 2010 database project. However, I am only able to write SQL in text mode; there is no functionality for creating a table, for example, in a visual view, as exists when you add a new database to the App_Data folder and work on it there. Is this the only way, and is there no visual way of doing this in a Visual Studio 2010 database project? Or am I missing some obvious way of getting to it? Thank you. Also, if there is a tutorial anywhere (video maybe!?) please link it. I only found a video on importing a database from an existing script using a wizard. I would like a new database from scratch, without the wizard.

    Read the article

  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo/, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so I'm not sure what exactly I'm looking for performance-wise just yet. I could structure the data in the db -- have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns if I start gathering a new stat. But it would be easy to sort/analyze the data just using SQL. Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field. Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
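
    A sketch of the unstructured option as described (illustrative names): one row per sample with the key-value pairs serialized as JSON in a TEXT column, plus the "latest chunk per host" query the UI needs today. Individual keys can always be extracted into a more structured table later for analysis:

        CREATE TABLE host_metrics (
            host_id      INT       NOT NULL,
            collected_at TIMESTAMP NOT NULL,
            payload      TEXT      NOT NULL,   -- JSON blob of the /proc/* key-value pairs
            PRIMARY KEY (host_id, collected_at)
        );

        -- Latest sample per host for the UI.
        SELECT hm.*
        FROM   host_metrics hm
        JOIN  (SELECT host_id, MAX(collected_at) AS max_collected
               FROM   host_metrics
               GROUP  BY host_id) newest
          ON   newest.host_id       = hm.host_id
         AND   newest.max_collected = hm.collected_at;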

    Read the article

  • Synonym for "Many-to-Many" relationship (relational databases)

    - by Byron
    What's a synonym for a "many-to-many" relationship? I've finished writing an object-relational mapper but I'm still stumped as to what to name the function that adds that relation. addParent() and addChild() seemed quite logical for the many-to-one/one-to-many and addSuperclass() for one-to-one inheritance, but addManyToMany() would sound quite unintuitive to an object-oriented programmer. addSibling() or addCousin() doesn't really make sense either. Any suggestions? And before you dismiss this as a non-programming question, please remember that consistent naming schemes and encapsulation are pretty integral to programming :)

    Read the article
