Search Results

Search found 3343 results on 134 pages for 'sqlserver 2000'.


  • jQuery fadeIn leaves text not anti-aliased in IE7

    - by cdillon
    Why does this happen? Any workarounds? Example: http://chrisdillon.us/jquery_fadein_problem1.html

    jQuery:

        $(function() {
          $('p.quote').fadeIn(2000);
        });

    HTML:

        <p>someone said:</p>
        <p class="quote">&ldquo;lorem ipsum&rdquo;</p>
        <p>someone else said:</p>
        <p class="quote" style="display: none;">&ldquo;magna carta&rdquo;</p>
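    A commonly suggested workaround, shown here as a sketch rather than a confirmed fix for this exact page: during fadeIn, IE7 applies its proprietary alpha filter to the element, which turns off ClearType anti-aliasing; removing the filter in the fadeIn callback restores the anti-aliased text once the animation finishes.

        $(function() {
          $('p.quote').fadeIn(2000, function() {
            // IE leaves its alpha filter on the element after the fade;
            // clearing it lets ClearType anti-aliasing kick back in.
            if (this.style.removeAttribute) {
              this.style.removeAttribute('filter');
            }
          });
        });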

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!).

    In particular in the time prior to the y2k problem, it has been noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 1960s and 1970s. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes were actually sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand results in a few tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you count in the operating system and the data for the program, how many lines of code would have fit into the main memory of half a megabyte?

    Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! In the time-frame of 20 to 30 years this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it?

    Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had their own programming language, PL/I. Above I have assumed that the majority of code was written exclusively in Cobol. However, all other things being equal, I wonder if IBM marketing had really pushed their own development off the market in favor of Cobol on their machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded their own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.
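    The back-of-envelope arithmetic in Questions 3 and 4 is easy to check. A quick sketch; the inputs are the post's own assumptions, not independent data:

        # Question 3: total code if every machine were programmed from scratch
        loc_per_machine = 50000
        machines = 20000
        print(loc_per_machine * machines)         # 1,000,000,000 -- vs. the claimed 200 billion

        # Question 4: man-years needed for 200 billion lines at 10 l.o.c. per day
        total_loc = 200e9
        man_years = total_loc / 10 / 365
        print(round(man_years / 1e6, 1))          # ~54.8 million man-years
        print(round(man_years / 25 / 1e6, 1))     # ~2.2 million programmers over ~25 years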

    Read the article

  • Cross-database transactions from one SP

    - by Michael Bray
    I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server using 'Linked Servers', and the SQL versions are mixed (SQL 2008, SQL 2005, and SQL 2000). I intend to write a stored procedure in one of the databases, but I would like to do so using a transaction to make sure that each database gets updated consistently. Which of the following is the most accurate: Will a single BEGIN/COMMIT TRANSACTION work to guarantee that all statements across all databases are successful? Will I need multiple BEGIN TRANSACTIONs, one for each set of commands against a database? Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support. Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure if possible. Any other suggestions are welcome as well. Thanks!
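    For reference, the usual way to make updates through linked servers atomic is a distributed transaction, which requires MSDTC to be running and configured on every participating server. A sketch only; the linked server, database, and table names below are made up:

        BEGIN DISTRIBUTED TRANSACTION;

        UPDATE dbo.Orders
           SET Status = 'Shipped' WHERE OrderId = @OrderId;         -- local database

        UPDATE [LINKED2005].RemoteDb.dbo.Orders
           SET Status = 'Shipped' WHERE OrderId = @OrderId;         -- linked SQL 2005 server

        UPDATE [LINKED2000].LegacyDb.dbo.Orders
           SET Status = 'Shipped' WHERE OrderId = @OrderId;         -- linked SQL 2000 server

        COMMIT TRANSACTION;   -- all three updates commit or roll back together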

    Read the article

  • How can I execute a .sql from C#?

    - by J. Pablo Fernández
    For some integration tests I want to connect to the database and run a .sql file that has the schema needed for the tests to actually run, including GO statements. How can I execute the .sql file? (Or is this totally the wrong way to go?) I've found a post in the MSDN forum showing this code:

        using System.Data.SqlClient;
        using System.IO;
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                    FileInfo file = new FileInfo("C:\\myscript.sql");
                    string script = file.OpenText().ReadToEnd();
                    SqlConnection conn = new SqlConnection(sqlConnectionString);
                    Server server = new Server(new ServerConnection(conn));
                    server.ConnectionContext.ExecuteNonQuery(script);
                }
            }
        }

    but on the last line I'm getting this error:

        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---
        System.TypeInitializationException: The type initializer for '' threw an exception. ---
        .ModuleLoadException: The C++ module failed to load during appdomain initialization. ---
        System.DllNotFoundException: Unable to load DLL 'MSVCR80.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E).

    I was told to go and download that DLL from somewhere, but that sounds very hacky. Is there a cleaner way to do this? Is there another way to do it? What am I doing wrong? I'm doing this with Visual Studio 2008, SQL Server 2008, .NET 3.5 SP1 and C# 3.0.
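    One frequently suggested alternative that avoids the SMO assemblies (and with them the MSVCR80.dll dependency) is to split the script on the GO batch separators yourself and run each batch through plain ADO.NET. A sketch, reusing the connection string and file path from the question and assuming GO appears on its own line, as SSMS-generated scripts normally write it:

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Text.RegularExpressions;

        class RunSqlScript
        {
            static void Main()
            {
                string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                string script = File.ReadAllText(@"C:\myscript.sql");

                // GO is a client-side batch separator, not T-SQL, so split on it first.
                string[] batches = Regex.Split(script, @"^\s*GO\s*$",
                                               RegexOptions.Multiline | RegexOptions.IgnoreCase);

                using (SqlConnection conn = new SqlConnection(sqlConnectionString))
                {
                    conn.Open();
                    foreach (string batch in batches)
                    {
                        if (batch.Trim().Length == 0) continue;   // skip empty fragments
                        using (SqlCommand cmd = new SqlCommand(batch, conn))
                        {
                            cmd.ExecuteNonQuery();                // run one batch at a time
                        }
                    }
                }
            }
        }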

    Read the article

  • "java.lang.ArrayIndexOutOfBoundsException" with System.arraycopy()

    - by Noona
    These few lines of code are giving me a "java.lang.ArrayIndexOutOfBoundsException"; could someone please take a look and point out why? (The exception is thrown by the second arraycopy() call.)

        byte[] newContentBytes = EntityUtils.toByteArray((serverResponse.getEntity()));
        newContent = new String(newContentBytes);
        System.out.println(newContent);
        byte[] headerBytes = headers.getBytes();
        byte[] res = new byte[newContentBytes.length + headerBytes.length];
        //headerBytes.
        System.arraycopy(headerBytes, 0, res, 0, headerBytes.length);
        System.out.println("length: " + newContentBytes.length);
        System.arraycopy(newContentBytes, 0, res, newContentBytes.length, newContentBytes.length);

    The problem seems to be in allocating the size of res; for example, if I write new byte[newContentBytes.length + headerBytes.length + 2000] instead, the exception doesn't occur. So what should the correct size be?
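    The allocated size is actually fine; the likely culprit is the destination offset in the second arraycopy() call, which should be headerBytes.length (where the header ends) rather than newContentBytes.length. That also explains why padding the array with an extra 2000 bytes hides the exception instead of fixing it. A sketch of the corrected copies:

        byte[] res = new byte[headerBytes.length + newContentBytes.length];
        // Copy the header to the front of the buffer...
        System.arraycopy(headerBytes, 0, res, 0, headerBytes.length);
        // ...then copy the content starting right where the header ends.
        System.arraycopy(newContentBytes, 0, res, headerBytes.length, newContentBytes.length);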

    Read the article

  • How do I set up Linq to SQL and WCF

    - by Jisaak
    So I'm venturing out into the world of Linq and WCF web services and I can't seem to make the magic happen. I have a VERY basic WCF web service going and I can get my old SqlConnection calls to work and return a DataSet. But I can't/don't know how to get the Linq to SQL queries to work. I'm guessing it might be a permissions problem, since I need to connect to the SQL database with a specific set of credentials, but I don't know how I can test if that is the issue. I've tried using both of these connection strings and neither seems to give me a different result.

        <add name="GeoDataConnectionString"
             connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;Integrated Security=True"
             providerName="System.Data.SqlClient" />

        <add name="GeoDataConnectionString"
             connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;User ID=domain\userName; Password=blahblah; Trusted_Connection=true"
             providerName="System.Data.SqlClient" />

    Here is the function in my service that does the query, and I have the interface add the [OperationContract]:

        public string GetCity(int cityId)
        {
            GeoDataContext db = new GeoDataContext();

            var city = from c in db.Cities
                       where c.CITY_ID == 30429
                       select c.DESCRIPTION;

            return city.ToString();
        }

    The GeoData.dbml only has one simple table in it with a list of city IDs and city names. I have also changed the "Serialization Mode" on the DataContext to "Unidirectional", which from what I've read needs to be done for WCF. When I run the service I get this as the return:

        SELECT [t0].[DESCRIPTION]
        FROM [dbo].[Cities] AS [t0]
        WHERE [t0].[CITY_ID] = @p0

    Dang, so as I'm writing this I realize that maybe my query is all messed up?
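    The returned text is a strong hint that the connection and permissions are fine: calling ToString() on an IQueryable gives back the generated SQL command text instead of executing the query. A sketch of the materialised version, which also uses the cityId parameter instead of the hard-coded 30429:

        public string GetCity(int cityId)
        {
            using (GeoDataContext db = new GeoDataContext())
            {
                // FirstOrDefault() actually executes the query and returns the value.
                return (from c in db.Cities
                        where c.CITY_ID == cityId
                        select c.DESCRIPTION).FirstOrDefault();
            }
        }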

    Read the article

  • Software Licensing / Registration component/framework?

    - by Clay Nichols
    We use a home-grown registration system for our software, but I'd like to update it, fixing a number of things including adding the ability to remotely activate/deactivate it (to facilitate SaaS). Feel free to suggest any good (in your opinion) VB6-compatible option. I can check whether it meets our other criteria below.

    Required features:

    - Activates multiple programs (OK if it generates a separate code for each one)
    - Works with VB6 and VB.NET; a VB6-compatible DLL should be fine
    - Still supported (nice to have but not absolutely required)
    - Compatible with Windows 2000 through 7

    Nice-to-have features (but not required):

    - Works without internet access
    - Works through a firewall (this may be a tough one)

    Any suggestions?

    Read the article

  • Fade between looped background images using jQuery

    - by da5id
    I'm trying to get the background image of a legacy div (by which I mean it already has a background image, which I cannot control & thus have to initially over-write) to smoothly fade between new images indefinitely. Here's what I have so far:

        var images = [
            "/images/home/19041085158.jpg",
            "/images/home/19041085513.jpg",
            "/images/home/19041085612.jpg"
        ];
        var counter = 0;

        setInterval(function() {
            $(".home_banner").css('backgroundImage', 'url("'+images[counter]+'")');
            counter++;
            if (counter == images.length) {
                counter = 0;
            }
        }, 2000);

    Trouble is, it's not smooth (I'm aiming for something like the innerfade plugin). EDIT: the question originally said "and it's not indefinite (it only runs once through the array)", but Mario corrected my stupid naming error. EDIT2: I'm now using Reigel's answer (see below), which works perfectly, but I still can't find any way to fade between the images smoothly. All help gratefully appreciated :)
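    One way to get a smooth transition with core jQuery alone (a sketch, not the innerfade plugin itself): fade the element out, swap the background inside the callback, then fade it back in. It is a fade-through rather than a true cross-fade, but it keeps the rotation indefinite and smooth:

        var images = [
            "/images/home/19041085158.jpg",
            "/images/home/19041085513.jpg",
            "/images/home/19041085612.jpg"
        ];
        var counter = 0;

        setInterval(function() {
            $(".home_banner").fadeOut(500, function() {
                // Swap the background while the element is hidden, then fade back in.
                $(this).css('backgroundImage', 'url("' + images[counter] + '")').fadeIn(500);
                counter = (counter + 1) % images.length;
            });
        }, 2000);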

    Read the article

  • Load first page on splashscreen using phonegap on android

    - by Syg
    Hi, I'm using PhoneGap in conjunction with jQuery Mobile. I'm trying to immediately fetch the main page while showing the user a splash screen. In PhoneGap for Android I'm using this:

        super.setIntegerProperty("splashscreen", R.drawable.splash);
        super.loadUrl("file:///android_asset/www/index.html", 2000);

    While this loads the splash, it also delays the loading of index.html. Is it possible to start fetching it right away? Also, if not with PhoneGap, has anybody done this using JQM instead of PhoneGap? UPDATE: After using it with a slower-loading first page (doing a JSON request) it kinda looks like the splash screen shows for a longer period of time, so this appears to be the default behavior.

    Read the article

  • Random Number on SQL without using NewID()

    - by Angel Escobedo
    Hello, I want to generate a unique random number without using the following statement:

        Convert(int, (CHECKSUM(NEWID()))*100000) AS [ITEM]

    because when I use join clauses in the FROM, using NEWID() produces duplicated rows. I'm using SQL Server 2000. P.S.: When I use RAND() it may repeat with a probability of maybe 1 in 100,000,000, but this is so critical that the probability of repeating a generated random value has to be 0%. My query with NEWID() gives duplicated rows in the SELECT result (x2); my query without NEWID(), using RAND() in the SELECT, gives single rows (x1), but the probability of repeating the generated random value, while uncertain, still exists! Thanks!
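    One technique that works on SQL Server 2000 without NEWID() is to seed RAND() from a per-row expression, so it is evaluated once per row instead of once per query. Whether this also removes the duplication seen with the joins depends on the actual query, so treat this as a sketch (the table and column names are made up), and note that it does not by itself guarantee uniqueness:

        SELECT c.CustomerID,
               CAST(RAND(CHECKSUM(c.CustomerID) + DATEPART(ms, GETDATE())) * 100000 AS int) AS [ITEM]
        FROM   dbo.Customers AS c

    If the values truly must never repeat, a safer route is to assign them ahead of the join, e.g. generate them into a temporary table with a UNIQUE constraint and join to that table afterwards.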

    Read the article

  • Is it possible to use SqlGeography with Linq to Sql?

    - by cofiem
    I've been having quite a few problems trying to use Microsoft.SqlServer.Types.SqlGeography. I know full well that support for this in Linq to Sql is not great. I've tried numerous ways, beginning with what would be the expected way (database type of geography, CLR type of SqlGeography). This produces the NotSupportedException, which is widely discussed via blogs. I've then gone down the path of treating the geography column as a varbinary(max), as geography is a UDT stored as binary. This seems to work fine (with some binary reading and writing extension methods). However, I'm now running into a rather obscure issue, which does not seem to have happened to many other people:

        System.InvalidCastException: Unable to cast object of type 'Microsoft.SqlServer.Types.SqlGeography' to type 'System.Byte[]'.

    This error is thrown from an ObjectMaterializer when iterating through a query. It seems to only occur when the tables containing geography columns are included in a query implicitly (i.e. using the EntityRef<> properties to do joins).

        System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()

    My question: if I'm retrieving the geography column as varbinary(max), I might expect the reverse error: can't cast byte[] to SqlGeography. That I would understand. This I don't. I do have some properties on the partial Linq to SQL classes that hide the binary conversion... could those be the issue? Any help appreciated, and I know there's probably not enough information.

    Read the article

  • SQL Server 2008 - Script Data as Insert Statements from SSIS Package

    - by Brandon King
    SQL Server 2008 provides the ability to script data as Insert statements using the Generate Scripts option in Management Studio. Is it possible to access the same functionality from within a SSIS package? Here's what I'm trying to accomplish... I have a scheduled job that nightly scripts out all the schema and data for a SQL Server 2008 database and then uses the script to create a "mirror copy" SQLCE 3.5 database. I've been using Narayana Vyas Kondreddi's sp_generate_inserts stored procedure to accomplish this, but it has problems with some datatypes and greater-than-4K columns (holdovers from SQL Server 2000 days). The Script Data function looks like it could solve my problems, if only I could automate it. Any suggestions?
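    The Generate Scripts engine is exposed programmatically through SMO, so a Script Task inside the SSIS package should be able to drive it. A rough sketch (the server, database, and table names are placeholders); note that when ScriptData is enabled, SMO expects EnumScript rather than Script:

        // References assumed: Microsoft.SqlServer.Smo, Microsoft.SqlServer.ConnectionInfo,
        // Microsoft.SqlServer.Management.Sdk.Sfc (SQL Server 2008 SMO).
        using System;
        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Management.Sdk.Sfc;

        class ScriptTableData
        {
            static void Main()
            {
                Server server = new Server("(local)");
                Database db = server.Databases["MyDatabase"];

                Scripter scripter = new Scripter(server);
                scripter.Options.ScriptData = true;     // emit INSERT statements
                scripter.Options.ScriptSchema = false;  // data only

                foreach (string statement in scripter.EnumScript(new Urn[] { db.Tables["MyTable"].Urn }))
                {
                    Console.WriteLine(statement);
                }
            }
        }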

    Read the article

  • How to insert bulk data into a MySQL table from ASP.NET at once

    - by kranthi
    Hi, I have a requirement to read an Excel sheet using ASP.NET/C# and insert all the records into a MySQL table. The Excel sheet consists of around 2000 rows and 50 columns. Currently, upon reading the Excel records, I am inserting them one by one into the MySQL table using a prepared statement, but it's taking around 70 seconds because of the huge data. I've also thought of creating a new DataRow, assigning values to each cell, adding the resulting DataRow to a DataTable and finally calling dataadapter.Update(...), but that seems complex because I have around 50 columns and hence I'll have to assign 50 values to the DataRow. Could someone please suggest an alternative to improve the performance of the insertion? Thanks
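    Two common ways to cut the 70 seconds down are to batch many rows into a single multi-row INSERT, or to write the rows to a delimited temp file and load it with MySqlBulkLoader from MySQL Connector/NET (a wrapper around LOAD DATA INFILE). A sketch of the latter; the table name and file path are assumptions, and the Excel rows are presumed to have been written out to a tab-separated file first:

        using (MySqlConnection conn = new MySqlConnection(connectionString))
        {
            conn.Open();

            MySqlBulkLoader loader = new MySqlBulkLoader(conn);
            loader.TableName = "mytable";
            loader.FileName = @"C:\temp\excel_rows.tsv";
            loader.FieldTerminator = "\t";
            loader.LineTerminator = "\n";

            int rowsInserted = loader.Load();   // one round trip for all 2000 rows
        }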

    Read the article

  • The width of a list that contains an embedded matrix grows unexpectedly

    - by Greg Lorenz
    I have a report in Reporting Services 2005 that includes a list with an embedded matrix, and I am attempting to put a border on the list. However, when I run the report in Visual Studio the matrix grows past the end of the page, and therefore the border grows with it, causing it to spill onto the second page. It appears that there was supposedly a fix for this in Reporting Services 2000 Service Pack 1, but I am still experiencing the issue in 2005. The list has a details group that limits the records in a row to 4 using the following expression: =ceiling(rownumber("list1")/4). The matrix has a column group that should recycle those based on the row number determined by the list, using the following expression: =rowNumber("list1_Details_Group"). I have also attempted to put the list in a rectangle in hopes of stopping the matrix from growing, to no avail. How do I effectively stop the matrix from growing past the space allowed by the list control?

    Read the article

  • How to control the order of assignment for a new identity column in MSSQL?

    - by alpav
    I have a table with a CreateDate datetime field default(getdate()) that does not have any identity column. I would like to add an identity(1,1) field that would reflect the same order for the existing records as the CreateDate field (ORDER BY on either would give the same results). How can I do that? I guess if I create a clustered key on the CreateDate field and then add the identity column it will work (not sure if it's guaranteed); is there a good/better way? I am interested in SQL 2005, but I guess the answer will be the same for SQL 2008 and SQL 2000.
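    One route with a documented guarantee is to create a copy of the table that already has the identity column and fill it with INSERT ... SELECT ... ORDER BY: for that form SQL Server guarantees that identity values are assigned in the ORDER BY order, whereas clustering on CreateDate and then adding the identity column offers no such guarantee. A sketch, with the remaining columns left as placeholders:

        CREATE TABLE dbo.MyTable_New
        (
            Id         int IDENTITY(1,1) NOT NULL,
            CreateDate datetime NOT NULL DEFAULT (GETDATE())
            -- ... remaining columns from the original table ...
        );

        INSERT INTO dbo.MyTable_New (CreateDate /*, other columns */)
        SELECT CreateDate /*, other columns */
        FROM   dbo.MyTable
        ORDER  BY CreateDate;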

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on the tables. There are some tables where we don't really use the primary key for any querying purpose, but we have indexes on other columns. These are non-unique indexes. The queries use these non-primary-key columns in the WHERE conditions. So I don't really see any benefit in having a primary key on such tables. My experience with SQL 2000 was that it would only replicate tables that had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side effects of having tables that don't have a primary key.

    Read the article

  • Intermittent "No Database Selected" in PHP/MySQL?

    - by ANE
    I have a PHP/MySQL form with a dropdown box containing a list of 350 names. When any random name is selected, sometimes it works and displays info about that name from the database, and sometimes the form gives the error "No Database Selected". Here's what I've tried, pretty much grasping at straws as I'm not a programmer:

    - Increasing max_connections in /etc/my.cnf from 200 to 2000 (even though only 4-5 connections are made and it's a lightly used server)
    - Changing mysql_pconnect to mysql_connect
    - Adding the word true to this connection string: $mysql = mysql_pconnect($hostname_mysql, $username_mysql, $password_mysql, true) or trigger_error(mysql_error(),E_USER_ERROR);
    - Changing the word require_once to require on this line: <?php require('/home/user/Connections/mysql.php'); ?>
    - Enabling MySQL & PHP query & error logging (no errors logged)

    Here is the code: [removed old bad code]

    Update: Working answer from Rob Apodaca below.
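    For reference, the usual shape of the fix for intermittent "No Database Selected" errors (a sketch, not necessarily the exact answer referred to above): select the database explicitly on the same link that the queries use. With persistent connections, a reused link is not guaranteed to still have a database selected, which fits failures that only happen some of the time. The $database_mysql variable and the people table below are assumptions:

        <?php
        require('/home/user/Connections/mysql.php');

        $link = mysql_connect($hostname_mysql, $username_mysql, $password_mysql, true)
            or trigger_error(mysql_error(), E_USER_ERROR);

        // Select the database on this specific link before running any queries.
        mysql_select_db($database_mysql, $link)
            or trigger_error(mysql_error(), E_USER_ERROR);

        $result = mysql_query("SELECT * FROM people WHERE name = 'John'", $link)
            or trigger_error(mysql_error(), E_USER_ERROR);
        ?>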

    Read the article

  • java tool for debugging

    - by user269723
    Hi experts, we are currently studying a Java-based tool which is primarily a reporting tool. It was developed in the 2000/2001 period and uses many open source libraries like Apache Avalon / Mx4J.Adaptor / edu.Oswego (the Java concurrent package), etc. The tool uses JDK 1.3.1 and the goal is to upgrade to JDK 1.5. We have also been asked to remove these 'outdated' packages and replace them with standard Java packages if possible. Unfortunately, we have the code available for study, but it lacks any documentation and it is really difficult to track the flow during debugging (the total number of classes written might be more than 1000). What's the best way to understand this kind of tool? Is there any graphical tool to see the relationships between the classes? Thanks SR

    Read the article

  • Configuring the expiry time for the messages destined to the "Expired message address" in Hornetq

    - by Rohit
    I have configured a message expiry destination in HornetQ as below:

        <address-setting match="#">
            <dead-letter-address>jms.queue.error</dead-letter-address>
            <expiry-address>jms.queue.error</expiry-address>
            <max-delivery-attempts>3</max-delivery-attempts>
            <redelivery-delay>2000</redelivery-delay>
            <max-size-bytes>10485760</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>BLOCK</address-full-policy>
            <redistribution-delay>60000</redistribution-delay>
        </address-setting>

    The messages do get redirected to the expiry address once the expiration time is exceeded, but these messages then live indefinitely on the expiry address. Is there a way to set an expiry time for these messages so that they live for only a limited time on the expiry address?

    Read the article

  • What's a reasonable number of rows and tables to be able to join in MySQL?

    - by Philip Brocoum
    I have one table that maps locations to postal codes. For example, New York State has about 2000 postal codes. I have another table that maps mail to the postal codes it was sent to, but this table has about 5 million rows. I want to find all the mail that was sent to New York State, which seems simple enough, but the query is unbelievably slow. I haven't been able to even wait long enough for it to finish. Is the problem that there are 5 million rows? I can't help but think that 5 million shouldn't be such a large number for a computer these days... Oh, and everything is indexed. Is SQL just not designed to handle such large joins?
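    For scale, a 5-million-row join is routine for MySQL when the join key and the filter column are indexed and the indexes are actually being used. A sketch with assumed table and column names; EXPLAIN shows whether the optimizer is picking them up:

        -- Assumed schema: postal_codes(state, postal_code) and mail(postal_code, ...)
        CREATE INDEX idx_postal_state ON postal_codes (state, postal_code);
        CREATE INDEX idx_mail_postal  ON mail (postal_code);

        SELECT m.*
        FROM   mail m
        JOIN   postal_codes p ON p.postal_code = m.postal_code
        WHERE  p.state = 'NY';

        -- Prefix the query with EXPLAIN to confirm both indexes appear in the plan.
        EXPLAIN SELECT m.*
        FROM   mail m
        JOIN   postal_codes p ON p.postal_code = m.postal_code
        WHERE  p.state = 'NY';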

    Read the article

  • C lang. -- Error: Segmentation fault

    - by user233542
    I don't understand why this would give me a seg fault. Any ideas? This is the function that returns the signal to stop the program (below it is the other function that is called within this):

        double bisect(double A0, double A1, double Sol[N], double tol, double c)
        {
            double Amid, shot;
            while (A1 - A0 > tol) {
                Amid = 0.5*(A0+A1);
                shot = shoot(Sol, Amid, c);
                if (shot == 2.*Pi) { return Amid; }
                if (shot > 2.*Pi) { A1 = Amid; }
                else if (shot < 2.*Pi) { A0 = Amid; }
            }
            return 0.5*(A1+A0);
        }

        double shoot(double Sol[N], double A, double c)
        {
            int i,j;
            /* Initial Conditions */
            for (i=0;i for (i=buff+2;i return Sol[i-1];
        }

    buff, l, N are defined using a #define statement: l = 401, buff = 50, N = 2000. Thanks

    Read the article

  • Visual Studio massive renaming of controls in WPF project

    - by Mark
    I have a scene in my WPF project with about 2000 different user controls:

        <local:MyControlType x:name="aaka4sn9f" />
        <local:MyControlType x:name="aaks22nf_1" />
        <local:MyControlType x:name="aa66s2f_2" />

    and so on down the screen... Is there a way to rename all of these controls using Visual Studio (and possibly a regex) to be:

        <local:MyControlType x:name="myControl1" />
        <local:MyControlType x:name="myControl2" />
        <local:MyControlType x:name="myControl3" />

    ?? Thanks a lot!
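    A plain regex find-and-replace cannot emit a sequential counter, so one option is a small one-off program (or an equivalent editor macro) that renumbers the matches. A sketch only: the file name is assumed, just the XAML is touched, and any references to the old names in code-behind would still need updating separately:

        using System.IO;
        using System.Text.RegularExpressions;

        class RenameControls
        {
            static void Main()
            {
                int counter = 0;
                string xaml = File.ReadAllText("MyScene.xaml");

                // Replace every x:name="..." with a sequential myControlN name.
                string renamed = Regex.Replace(
                    xaml,
                    "x:name=\"[^\"]*\"",
                    m => string.Format("x:name=\"myControl{0}\"", ++counter));

                File.WriteAllText("MyScene.xaml", renamed);
            }
        }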

    Read the article

  • How can I get all TD data upon clicking on a TR with jQuery

    - by KillerFish
    Hi friends, I have a table with multiple rows and I want to get all the TD data after clicking on a particular row. My table is:

        <table>
            <tr class="person">
                <td class="id">1900</td>
                <td class="name">John</td>
                <td class="gender">Male</td>
            </tr>
            <tr class="person">
                <td class="id">2000</td>
                <td class="name">Pitt</td>
                <td class="gender">Female</td>
            </tr>
        </table>

    How can I get the id, name and gender after clicking on a row using jQuery? For example, if I click on the John row I should get 1900, John, Male, and the same for Pitt. Thank you
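    A minimal sketch of one way to do it: bind a click handler to each row and read the cells by their class names.

        $('tr.person').click(function() {
            var id     = $(this).find('td.id').text();
            var name   = $(this).find('td.name').text();
            var gender = $(this).find('td.gender').text();

            // e.g. "1900, John, Male" when the John row is clicked
            alert(id + ', ' + name + ', ' + gender);
        });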

    Read the article

  • Three State Treeview Windows Forms

    - by alhambraeidos
    Hello everybody, one of the frustrations with the standard TreeView is that you can't show a partial selection to indicate that some of the children are selected. A partial selection in a complex tree allows the user to easily determine where the selections are, even if the tree is not completely expanded. A similar idea is used in many backup programs to allow the user to select the files to be backed up. Does anyone know of a way to make the checks in a treeview checkbox gray? I want to gray the check of a parent node when some (but not all) of its child nodes are checked. Kind of like what you see when you are selecting options to install during a software installation. I have seen several posts, but they are older (from 2000 and 2007). Now, in 2010, the best possible answer is needed. I use VS 2008, .NET 3.5, Windows Forms. Thanks in advance, greetings

    Read the article

  • How do I keep a scanner from throwing exceptions when the wrong type is entered? (java)

    - by David
    Here's some sample code:

        import java.util.Scanner;

        class In {
            public static void main(String[] arg) {
                Scanner in = new Scanner(System.in);
                System.out.println("how many are invading?");
                int a = in.nextInt();
                System.out.println(a);
            }
        }

    If I run the program and give it an int like 4, then everything goes fine. If, on the other hand, I answer "too many", it doesn't laugh at my funny joke. Instead I get this (as expected):

        Exception in thread "main" java.util.InputMismatchException
            at java.util.Scanner.throwFor(Scanner.java:819)
            at java.util.Scanner.next(Scanner.java:1431)
            at java.util.Scanner.nextInt(Scanner.java:2040)
            at java.util.Scanner.nextInt(Scanner.java:2000)
            at In.main(In.java:9)

    Is there a way I can make it either ignore entries that aren't ints, or re-prompt with "how many are invading?"? I'd like to know how to do both of these.
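    A sketch of the re-prompting variant using Scanner.hasNextInt(); dropping the println inside the loop gives the silently-ignore variant instead:

        import java.util.Scanner;

        class In {
            public static void main(String[] arg) {
                Scanner in = new Scanner(System.in);
                System.out.println("how many are invading?");
                // Keep discarding tokens (and re-prompting) until an int is available.
                while (!in.hasNextInt()) {
                    in.next();   // throw away the non-integer token ("too many", etc.)
                    System.out.println("how many are invading?");
                }
                int a = in.nextInt();
                System.out.println(a);
            }
        }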

    Read the article
