Search Results

Search found 14148 results on 566 pages for '2008'.


  • Thoughts on using Alpha Five v10 with Codeless AJAX for building an AJAX database app in a short amo

    - by william Hunter
    I need to build an AJAX application against our MS SQL Server database for my company. The app has to have user permissions and reporting and is pretty complex. I am really under the gun in terms of time; the company that I work for needs the app for an important project launch. A colleague/friend of mine in a different company recommended that I look at a product from Alpha Software called Alpha Five v10 with Codeless AJAX. He has told me that he has used it extensively and that it saves him a "serious boat load of time", and he says that he has not run into limitations because you can also write your own JavaScript or wire in jQuery. Before I commit to Alpha Five v10, I would like to get other opinions. Thanks. Norman Stern, Chicago

    Read the article

  • Trouble upgrading to .NET 4 with VS2008. What am I missing?

    - by Matt H.
    I downloaded the .NET 4 framework from Microsoft here: http://www.microsoft.com/downloads/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992&displaylang=en I installed it and rebooted. When I go to compile options -- target framework -- .NET 4 isn't on the list. Is .NET 4 not compatible with VS2008? It would be nice if Microsoft stated that somewhere...

    Read the article

  • Why does SQL Server recommend creating an index when it already exists?

    - by Pierre-Alain Vigeant
    I ran a very basic query against one of our tables and noticed that the query processor in the execution plan is recommending that we create an index on a column. The query is:

        SELECT SUM(DATALENGTH(Data))
        FROM Item
        WHERE Namespace = 'http://some_url/some_namespace/'

    After running it, I get the following message:

        -- The Query Processor estimates that implementing the following index could improve the query cost by 96.7211%.
        CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
        ON [dbo].[Item] ([Namespace])

    My problem is that I already have such an index on that column:

        CREATE NONCLUSTERED INDEX [IX_ItemNamespace] ON [dbo].[Item]
        (
            [Namespace] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    Why is SQL Server recommending that I create such an index when it already exists?

    Read the article

  • How can I use a generic list with foreach?

    - by Phsika
    If I compile the code below, the foreach loop produces an error. How can I solve it?

        Error 1  foreach statement cannot operate on variables of type 'Sortlist.MyCalisan' because 'Sortlist.MyCalisan' does not contain a public definition for 'GetEnumerator'  C:\Users\yusuf.karatoprak\Desktop\ExcelToSql\TestExceltoSql\Sortlist\Program.cs 46 13 Sortlist

        static void EskiMetodlaListele()
        {
            MyCalisan myCalisan = new MyCalisan();
            Calisan calisan = new Calisan();
            calisan.Ad = "ali";
            calisan.SoyAd = "abdullah";
            calisan.ID = 1;
            myCalisan.list.Add(calisan);
            foreach (Calisan item in myCalisan)
            {
                Console.WriteLine(item.Ad.ToString());
            }
        }

        public class Calisan
        {
            public int ID { get; set; }
            public string Ad { get; set; }
            public string SoyAd { get; set; }
        }

        public class MyCalisan
        {
            public List<Calisan> list { get; set; }
            public MyCalisan()
            {
                list = new List<Calisan>();
            }
        }
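
    For what it's worth, a minimal sketch (reusing the names from the question) of one way to make that foreach compile: have MyCalisan implement IEnumerable<Calisan> and delegate enumeration to its inner list.

        using System.Collections;
        using System.Collections.Generic;

        public class MyCalisan : IEnumerable<Calisan>
        {
            public List<Calisan> list { get; set; }

            public MyCalisan()
            {
                list = new List<Calisan>();
            }

            // foreach looks for GetEnumerator; delegate it to the wrapped list.
            public IEnumerator<Calisan> GetEnumerator()
            {
                return list.GetEnumerator();
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    The simpler alternative is to leave MyCalisan alone and loop over the list it wraps: foreach (Calisan item in myCalisan.list).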

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database into the Full recovery model and start taking transaction log backups. I take a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?

    Read the article

  • How to force the build to be out of date, when a text file is modified?

    - by demoncodemonkey
    The Scenario: My project has a post-build step that runs a batch file, which reads a text file "version.txt". The batch file uses the information in version.txt to inject a version block into the DLL using this tool. version.txt is included in my project to make it easy to modify. It looks a bit like this:

        @set #Description="TankFace Utility Library"
        @set #FileVersion="0.1.2.0"
        @set #Comments=""

    Basically the batch file renames this file to version.bat, calls it, then renames it back to version.txt afterwards.

    The Problem: When I modify version.txt (e.g. to increment the file version) and then press F7, the build is not seen as out of date, so the post-build step is not executed and the DLL's version doesn't get updated. I really want to include the .txt file as an input to the build, but without anything actually trying to use it. If I #include the .txt file from a .cpp file in the project, the compiler fails because it obviously doesn't understand what "@set" means. If I add /* ... */ comments around the @set commands, the batch file hits some syntax errors but eventually succeeds. But this is a poor solution, I think. So... how would you do it?

    Read the article

  • Make Visual Studio show all compile errors!

    - by BDotA
    Visual Studio does not show all compile errors at once. For example, one time it said I had two errors, and when I fixed them, 102 more compile errors showed up, and those new errors did not depend on the previous two. How can I tell it to go through all the code and show every compile error at once?

    Read the article

  • How to allow multiple users to manage an application running on a server?

    - by Mary-Chan
    I'm not sure if the title makes sense. Hard question to ask. I have an application running on a server under my network account, and it's scheduled to run daily. I can remote in with my user credentials and check on the application. What if I want more than one person to be able to remote in and check it? I can create a new account on the server, but it wouldn't have network rights and the application needs access to network folders. What would be the best approach? Thanks! :-) P.S. Feel free to edit the tags. I can't figure out what to pick.

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as follows. This query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows:

    1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and
    2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    Because of these estimates the optimiser chooses a nested loops join (since it assumes a small number of rows), so the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) by forcing a HASH JOIN. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.

    Read the article

  • Child sProc cannot reference a Local temp table created in parent sProc

    - by John Galt
    On our production SQL2000 instance, we have a database with hundreds of stored procedures, many of which use a technique of creating a #TEMP table "early" in the code; various inner stored procedures then get EXECUTEd by this parent sProc. In SQL2000, the inner or "child" sProcs have no problem INSERTing into #TEMP or SELECTing data from #TEMP. In short, I assume they can all refer to this #TEMP because they use the same connection. In testing with SQL2008, I see two manifestations of different behavior. First, at design time, the new "IntelliSense" feature in Management Studio complains when editing the child sProc that #TEMP is an "invalid object name". But worse, at execution time the invoked parent sProc fails inside the nested child sProc. Someone suggested that the solution is to change to ##TEMP, which is apparently a global temporary table that can be referenced from different connections. That seems too drastic a proposal, both because of the amount of work to chase down all the problem spots and because of the possible/probable nasty effects when these sProcs are invoked from web applications (i.e. multiuser issues). Is this indeed a change in behavior in SQL2005 or SQL2008 regarding #TEMP (local temp tables)? We skipped 2005, but I'd like to understand more precisely why this is occurring before I go off and try to hack out the needed fixes. Thanks.

    Read the article

  • jQuery - error updating JScript Intellisense

    - by BhejaFry
    Hello folks, I have searched similar posts on Google & Stack Overflow to no avail. Basically, what I have is a new aspx page with the following files referenced in the head section:

        jquery-ui-1.7.2.custom.css
        jquery-1.3.2.min.js
        jquery-ui-1.7.2.custom.min.js

    Theme: smoothness, UI version 1.7.2 for jQuery 1.3.2. I have nothing else going on in the page but still get the error "error updating JScript Intellisense". I am using VS2008 SP1. Any ideas? TIA.

    PS: VS shows IntelliSense if I refer to 1.4.1.js & 1.4.1-vsdoc.js.

    Read the article

  • How can I solve an out of memory exception in a generic list?

    - by Phsika
    How can I solve an out of memory exception in a generic list when adding new values?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }
            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }
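
    A rough sketch of two ways to ease the memory pressure, assuming dTable and myScriptCellsCount are the objects from the question and, in the second option, that only a total is actually needed: pre-size the list so it does not keep reallocating its backing array, or skip the per-cell list entirely and aggregate.

        using System.Collections.Generic;
        using System.Data;

        static class CellCounting
        {
            // Option 1: pre-size the list so it does not repeatedly reallocate and
            // copy its backing array while millions of lengths are added.
            static void FillPreSized(DataTable dTable, MyExcelSheetsCells myScriptCellsCount)
            {
                int cellCount = dTable.Columns.Count * dTable.Rows.Count;
                myScriptCellsCount.MyCellsCharactersCount = new List<int>(cellCount);
                foreach (DataColumn dc in dTable.Columns)
                    foreach (DataRow dr in dTable.Rows)
                        myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);
            }

            // Option 2: if only a total is wanted, avoid storing one entry per cell at all.
            static long TotalCharacters(DataTable dTable)
            {
                long total = 0;
                foreach (DataColumn dc in dTable.Columns)
                    foreach (DataRow dr in dTable.Rows)
                        total += dr[dc].ToString().Length;
                return total;
            }
        }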

    Read the article

  • VS2010: "Select Resource" dialog & resx location

    - by Dav
    Got two issues with the VS2010 / VS2008 Select Resource dialog - the one that appears when you want to add an image to a button in a WinForms app, for example.

    Give me my files back! It only seems to see the default project resources file (Properties\Resources.resx) and resx files in the project root (say MyProject\famfamfam.resx). We have quite a few icons all over the app, and because some of them come from different icon sets (like famfamfam) and some are related to this project only, we'd like to keep them separate. For that same reason (keeping the solution neat & tidy) we want to store these extra resource files in the Resources folder (e.g. Resources\famfamfam.resx). However, we'd also like to keep using the Select Resource dialog :-) Because it does not see the 'extra' resource files, we're having to select a 'fake' icon (from the global Resources.resx file) and then manually change it to reference the right icon in .Designer.cs. As you can imagine, this is a pain.

    Stop modifying my files! The second issue is a bit more annoying. We use the excellent MultiLang add-in for Visual Studio to globalize our app. It stores its translations in MultiLang.resx & MultiLang.XY.resx files in the project root, where XY is a language code, e.g. .cs.resx for Czech. These have to be set to the No code generation access modifier. What Select Resource seems to do is set all .resx files it can find to Internal.

    Exec summary: Is there a way to convince the Select Resource dialog to look for extra .resx files anywhere besides the project root? Is there any way to stop it from modifying the access modifier of the resources it does see (other than filing a bug with MS)? Thanks in advance for any suggestions!

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first four and X: used as a backup destination.

    Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately went to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- and this is where "whatever.ndf" should have been. Hmm, I thought, problem with the server; I'll just move SQL back to the other server and figure out what's wrong... Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep.

    The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes "re-synced" so that failover can be accomplished? I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • Can I use the auto-generated Linq-to-SQL entity classes in 'disconnected' mode?

    - by Gary McGill
    Suppose I have an automatically-generated Employee class based on the Employees table in my database. Now suppose that I want to pass employee data to a ShowAges method that will print out name & age for a list of employees. I'll retrieve the data for a given set of employees via a LINQ query, which will return me a set of Employee instances. I can then pass the Employee instances to the ShowAges method, which can access the Name & Age fields to get the data it needs. However, because my Employees table has relationships with various other tables in my database, my Employee class also has a Department field, a Manager field, etc. that provide access to related records in those other tables. If the ShowAges method were to touch any of those members, this would cause lots more data to be fetched from the database on demand. I want to be sure that the ShowAges method only uses the data I have already fetched for it, but I really don't want to have to go to the trouble of defining a new class that replicates the Employee class but has fewer members. (In my real-world scenario, the class would have to be considerably more complex than the Employee class described here; it would have several 'joined' classes that do need to be populated, and others that don't.) Is there a way to 'switch off' or 'disconnect' the Employee instances so that an attempt to access any property or related object that's not already populated will raise an exception? If not, then since this must be a common requirement, is there an already-established pattern for doing this sort of thing?
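
    Not a full answer, but a sketch of the closest switch LINQ to SQL itself offers, assuming a generated DataContext and Employee entity along the lines described above (the MyDataContext name and the Age filter are made up for illustration): setting DeferredLoadingEnabled to false stops the lazy round-trips, although unloaded associations then come back as null or empty rather than raising an exception.

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                using (var db = new MyDataContext())
                {
                    // No silent round-trips: associations that were not loaded up front
                    // (Department, Manager, ...) are left null/empty instead of being
                    // fetched on property access. Note this does not throw.
                    db.DeferredLoadingEnabled = false;

                    // Eagerly load only the associations the consumer is allowed to use.
                    var options = new DataLoadOptions();
                    options.LoadWith<Employee>(e => e.Department);
                    db.LoadOptions = options;

                    var employees = db.Employees.Where(e => e.Age >= 40).ToList();
                    ShowAges(employees);
                }
            }

            static void ShowAges(IEnumerable<Employee> employees)
            {
                foreach (var e in employees)
                    Console.WriteLine("{0}: {1}", e.Name, e.Age);
            }
        }

    If an exception on access is a hard requirement, the usual pattern is to project into a small read-only shape (select new { e.Name, e.Age } or a dedicated DTO) before handing the data to ShowAges, so nothing lazy-loadable ever leaves the data layer.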

    Read the article

  • Strange behaviour with Microsoft.WindowsCE.Forms

    - by Mohammadreza
    I have a Windows Mobile application in which I want to check the device orientation, so I wrote the following property in one of my forms:

        internal static Microsoft.WindowsCE.Forms.ScreenOrientation DeviceOriginalOrientation { get; private set; }

    The strange thing is that afterwards, whenever I open a UserControl, the designer shows this warning even if that UserControl doesn't use the property:

        Could not load file or assembly 'Microsoft.WindowsCE.Forms, Version=3.5.0.0, Culture=neutral, PublicKeyToken=969db8053d3322ac' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    Commenting out the above property dismisses the warning and shows the user control again. The application builds successfully and runs without any problems in both cases. Does anybody know why this happens and how I can fix it?

    Read the article

  • permutations gone wrong

    - by vbNewbie
    I have written code to implement a string permutation algorithm I found. What I have is an ArrayList of words (up to 200) and I need to permute the list in groups of 5: basically, group the words in fives and permute each group. What I have takes the first 5 words, generates their permutations, and ignores the rest of the ArrayList. Any ideas appreciated.

        Private Function permute(ByVal chunks As ArrayList, ByVal k As Long) As ArrayList
            ReDim ItemUsed(k)
            pno = 0
            Permutate(k, 1)
            Return chunks
        End Function

        Private Shared Sub Permutate(ByVal K As Long, ByVal pLevel As Long)
            Dim i As Long, Perm As String
            Perm = pString ' Save the current Perm
            ' for each value currently available
            For i = 1 To K
                If Not ItemUsed(i) Then
                    If pLevel = 1 Then
                        pString = chunks.Item(i)
                        'pString = inChars(i)
                    Else
                        pString = pString & chunks.Item(i)
                        'pString += inChars(i)
                    End If
                    If pLevel = K Then
                        'got next Perm
                        pno = pno + 1
                        SyncLock outfile
                            outfile.WriteLine(pno & " = " & pString & vbCrLf)
                        End SyncLock
                        outfile.Flush()
                        Exit Sub
                    End If
                    ' Mark this item unavailable
                    ItemUsed(i) = True
                    ' gen all Perms at next level
                    Permutate(K, pLevel + 1)
                    ' Mark this item free again
                    ItemUsed(i) = False
                    ' Restore the current Perm
                    pString = Perm
                End If
            Next
        End Sub

    K above is equal to 5 (the number of words in one permutation), but when I change the For loop to the ArrayList size I get an index out of bounds error.
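
    For what it's worth, here is a rough C# sketch of the chunk-then-permute behaviour described above (the word list and names are made up for illustration, not taken from the poster's code): split the list into groups of five and emit the permutations of each group separately.

        using System;
        using System.Collections.Generic;

        static class ChunkPermutations
        {
            static void Main()
            {
                var words = new List<string> { "a", "b", "c", "d", "e", "f", "g" };
                const int chunkSize = 5;

                // Walk the list in chunks of five (the last chunk may be shorter).
                for (int start = 0; start < words.Count; start += chunkSize)
                {
                    int length = Math.Min(chunkSize, words.Count - start);
                    Permute(words.GetRange(start, length), 0);
                }
            }

            // Classic in-place permutation by swapping; prints each complete arrangement.
            static void Permute(List<string> items, int level)
            {
                if (level == items.Count)
                {
                    Console.WriteLine(string.Join(" ", items.ToArray()));
                    return;
                }
                for (int i = level; i < items.Count; i++)
                {
                    string tmp = items[level]; items[level] = items[i]; items[i] = tmp;
                    Permute(items, level + 1);
                    tmp = items[level]; items[level] = items[i]; items[i] = tmp;
                }
            }
        }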

    Read the article

  • How to get parameter values for dm_exec_sql_text

    - by Ted Elliott
    I'm running the following statement to see what queries are executing in SQL Server:

        SELECT *
        FROM sys.dm_exec_requests r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle)
        WHERE r.database_id = DB_ID('<dbname>')

    The SQL text that comes back is parameterized:

        (@Parm0 int) select * from foo where foo_id = @Parm0

    Is there any way to get the values of the parameters that the statement is using? Say, by joining to another table perhaps?

    Read the article

  • Is it possible to create a new T-SQL Operator using CLR Code in MSSQL?

    - by Eoin Campbell
    I have a very simple CLR function for doing regex matching:

        public static SqlBoolean RegExMatch(SqlString input, SqlString pattern)
        {
            if (input.IsNull || pattern.IsNull)
                return SqlBoolean.False;
            return Regex.IsMatch(input.Value, pattern.Value, RegexOptions.IgnoreCase);
        }

    It allows me to write a SQL statement like:

        SELECT * FROM dbo.table1
        WHERE dbo.RegexMatch(column1, '[0-9][A-Z]') = 1 -- match entries in col1 like 1A, 2B etc...

    I'm just thinking it would be nice to reformulate that query so it could be called like:

        SELECT * FROM dbo.table1
        WHERE column1 REGEXLIKE '[0-9][A-Z]'

    Is it possible to create new comparison operators using CLR code? (I'm guessing from my brief glance around the web that the answer is no, but there's no harm in asking.) Thanks, Eoin C

    Read the article

  • problem with treeview

    - by Xaver
    I want to configure a TreeView so that when all of a parent node's child checkboxes are checked, the parent's checkbox is checked, and when all of them are unchecked, the parent's checkbox is unchecked. Does the TreeView class have a standard property for that?
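
    Assuming this is the WinForms System.Windows.Forms.TreeView: as far as I know there is no built-in property for this, but a small sketch of the usual workaround is to propagate the state in the AfterCheck event, roughly like the following (the form and node names are just for illustration).

        using System;
        using System.Windows.Forms;

        public class CheckTreeForm : Form
        {
            private readonly TreeView tree = new TreeView { CheckBoxes = true, Dock = DockStyle.Fill };

            public CheckTreeForm()
            {
                Controls.Add(tree);
                tree.Nodes.Add("Parent").Nodes.AddRange(new[] { new TreeNode("Child 1"), new TreeNode("Child 2") });
                tree.AfterCheck += OnAfterCheck;
            }

            private void OnAfterCheck(object sender, TreeViewEventArgs e)
            {
                // Changes made from code arrive with Action == Unknown; skip them so the
                // handler does not re-trigger itself while it updates parent nodes.
                if (e.Action == TreeViewAction.Unknown)
                    return;

                TreeNode parent = e.Node.Parent;
                while (parent != null)
                {
                    bool allChecked = true;
                    bool allUnchecked = true;
                    foreach (TreeNode child in parent.Nodes)
                    {
                        if (child.Checked) allUnchecked = false;
                        else allChecked = false;
                    }

                    if (allChecked) parent.Checked = true;
                    else if (allUnchecked) parent.Checked = false;

                    parent = parent.Parent;
                }
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new CheckTreeForm());
            }
        }

    A "some children checked" state would need a tri-state workaround (e.g. state images), since these checkboxes are strictly two-state.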

    Read the article
