Search Results

Search found 49554 results on 1983 pages for 'database users'.

Page 740/1983 | < Previous Page | 736 737 738 739 740 741 742 743 744 745 746 747  | Next Page >

  • Full-text Indexing Books Online

    - by Most Valuable Yak (Rob Volk)
    While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): Could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it?  The answer is yes, and it's really quite easy to do.

    The first step is finding the installed help files.  If you have SQL Server 2012, BOL is installed under the Microsoft Help Library.  You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager.  When the new window pops up, click the Settings link and you'll get the following: You'll see the path under Library Location. Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store.  This is where the help file content is kept if you downloaded it for offline use. Depending on which products you've downloaded help for, you may see a few hundred files.  Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files.  We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files.

    Despite the .MSHC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7-Zip, WinRAR, etc.) can open them.  When you do, you'll see a few thousand files in the archive.  We are only interested in the .htm files, but there's no harm in extracting all of them to a folder.  7-Zip provides a command-line utility, and the following will extract to a D:\SQLHelp folder previously created: 7z e -oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm

    Well that's great Rob, but how do I put all those files into a full-text index? I'll tell you in a second, but first we have to set up a few things on the database side.  I'll be using a database named Explore (you can certainly change that) and the following setup is a fragment of the script I used in my presentation: USE Explore; GO CREATE SCHEMA help AUTHORIZATION dbo; GO -- Create default fulltext catalog for later FT indexes CREATE FULLTEXT CATALOG FTC AS DEFAULT; GO CREATE TABLE help.files(file_id int not null IDENTITY(1,1) CONSTRAINT PK_help_files PRIMARY KEY, path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE, doc_type varchar(6) DEFAULT('.xml'), content varbinary(max) not null); CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033) KEY INDEX PK_help_files; This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert.

    I'll be using the command line again for this; it's the easiest method I know: for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)" You'll need to copy and run that as one line in a command prompt.  I'll explain what this does while you run it and watch several thousand files get imported: The "for" command allows you to loop over a collection of items.  In this case we want all the .htm files in the D:\SQLHelp folder.  For each file it finds, it will assign the full path and file name to the %a variable.  In the "do" clause, we'll specify another command to be run for each iteration of the loop.  I make a call to "sqlcmd" in order to run a SQL statement.  I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection.  I then use -Q to run a query that I enclose in double quotes. The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object.  In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier.  This process continues for each file in the folder, creating one new row in the table.

    And that's it! 5 SQL statements and 2 command-line statements to unzip and import SQL Server Books Online!  In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so.  I believe that will make it even easier. In the spirit of exploration, I'll leave you to work on some fulltext queries of this content.  I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords, it's pretty interesting).  There are additional example queries in the download material for my presentation linked above.

    Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files.  Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords.  (You probably noticed this in the default definition for the doc_type column.)  You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed.

    I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago.  He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend.  He also has some FTS articles on Simple Talk: http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/ http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features,-part-2/
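
    Once the import finishes and the full-text crawl has caught up, a simple CONTAINS query makes a good smoke test. This is only a sketch against the help.files table built above; the search term is arbitrary, so substitute whatever you are curious about:

    ```sql
    USE Explore;
    GO
    -- Which help topics mention FILESTREAM?
    SELECT f.file_id, f.path
    FROM help.files AS f
    WHERE CONTAINS(f.content, '"FILESTREAM"');
    GO
    -- Most common keywords the full-text crawler extracted from the help content
    SELECT TOP (20) display_term, document_count
    FROM sys.dm_fts_index_keywords(DB_ID(), OBJECT_ID('help.files'))
    ORDER BY document_count DESC;
    ```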

    Read the article

  • Static Member Variables of the Same Class in C++

    - by helixed
    I'm trying to create a class which contains a static pointer to an instance of itself. Here's an example: A.h: #include <iostream> #ifndef _A_H #define _A_H class A { static A * a; }; A * a = NULL; #endif However, when I include A.h in another file, such as: #include "A.h" class B { }; I get the following error: ld: duplicate symbol _a in /Users/helixed/Desktop/Example/build/Example.build/Debug/Example.build/Objects-normal/x86_64/B.o and /Users/helixed/Desktop/Example/build/Example.build/Debug/Examplebuild/Objects-normal/x86_64/A.o I'm using the Xcode default compiler on Mac OS X Snow Leopard. Thanks, helixed
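
    For reference, the line `A * a = NULL;` in the header defines a *global* variable named a (not the static member A::a) in every translation unit that includes A.h, which is where the duplicate symbol comes from. A minimal sketch of the usual header/source split, using the same names as the question (the include guard is renamed to avoid the reserved leading-underscore form, and the definition lives in exactly one .cpp file):

    ```cpp
    // A.h
    #ifndef A_H
    #define A_H

    class A {
        static A* a;     // declaration only: no storage is allocated here
    };

    #endif

    // A.cpp
    #include <cstddef>   // for NULL
    #include "A.h"

    A* A::a = NULL;      // the one and only definition of the static member
    ```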

    Read the article

  • Map One-To-One Relationship Doesn't Allow Inserting

    - by nfplee
    Hi, I'm trying to set up a one-to-one mapping from my Users to the UserDetails table. Say I have the following tables in my database: Users: - UserID (PK, Identity) - UserName - Password UsersDetails: - UserID (PK, FK) - FirstName - LastName I have created the following POCO classes: public class User { public virtual int UserID { get; set; } public virtual string UserName { get; set; } public virtual string Password { get; set; } public virtual UserDetails Details { get; set; } } public class UserDetails { public virtual int UserID { get; set; } public virtual User User { get; set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public UserDetails() { } public UserDetails(User user) { User = user; } } Which are fluently mapped (please note the XML mapping is very similar and if all you know is the XML mapping then I would still appreciate your guidance): public class UserMap : ClassMap<User> { public UserMap() { Table("Users"); Id(x => x.UserID); Map(x => x.UserName); Map(x => x.Password); HasOne(x => x.Details) .Constrained() .Cascade.All(); } } public class UserDetailsMap : ClassMap<UserDetails> { public UserDetailsMap() { Table("UsersDetails"); Id(x => x.UserID) .GeneratedBy.Foreign("User"); HasOne(x => x.User) .Constrained(); Map(x => x.FirstName); Map(x => x.LastName); } } Everything displays correctly but if I say: var user = new User() { UserName = "Test", Password = "Test" }; user.Details = new UserDetails(user) { FirstName = "Test", LastName = "Test" }; session.Save(user); I get the error: "NHibernate.Id.IdentifierGenerationException: null id generated for: UserDetails." I'd really appreciate it if someone could show me what I've done wrong. Thanks

    Read the article

  • Regex Pattern for ignoring a custom escape character

    - by user1517464
    I am trying to find a suitable regex for matching pairs of custom characters in an input string. These custom characters are replaced by their corresponding HTML tags. For example, the input string can have underscores in pairs to indicate words in bold. Hence, _Name_ outputs as <b>Name</b>. However, if there is a genuine underscore in the string, it cannot be replaced by "bold" tags and has to be ignored. The genuine underscore has to be preceded by / (I couldn't find a better character, it could be one more underscore or hyphen or whatever). Any single or paired occurrence of this genuine underscore has to be ignored by regex. So far I could come up with this regex: var pattern = @"(?!/)_(.*?)(?!/)_"; But it fails on the input string below: _Tom_Katy/_Richard/_/_Stephan_and many users It outputs as <b>Tom</b>Katy/<b>Richard/_/</b>Stephan_and many users Many Thanks in Advance, Pr
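
    One direction worth trying (a rough sketch, not a tested drop-in): the (?!/) lookaheads check the character *after* the current position, so they never see the / that comes before an escaped underscore; a lookbehind (?<!/) does. The second step that unescapes the remaining /_ sequences is an assumption about the desired output:

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class BoldMarkup
    {
        static void Main()
        {
            string input = "_Tom_Katy/_Richard/_/_Stephan_and many users";

            // Pair of underscores, neither of which is preceded by the escape character '/'
            string bolded = Regex.Replace(input, @"(?<!/)_(.*?)(?<!/)_", "<b>$1</b>");

            // Then unescape the remaining '/_' sequences back to plain underscores
            string output = bolded.Replace("/_", "_");

            Console.WriteLine(output); // <b>Tom</b>Katy_Richard__Stephan_and many users
        }
    }
    ```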

    Read the article

  • inserting null values in datetime column and integer column

    - by reggie
    I am using C# and developing a WinForms application. I have a project class which has the project attributes. The constructor of the project class is as follows: newProject = new Project(GCD_ID.IsNull() ? (int?)null : Convert.ToInt32(GCD_ID), txt_Proj_Desc.Text, txt_Prop_Name.Text, ST.ID.ToString().IsNull() ? null: ST.ID.ToString(), cmbCentre.Text, SEC.ID.ToString().IsNull() ? null : SEC.ID.ToString(), cmbZone.Text, FD.ID.ToString().IsNull() ? null : FD.ID.ToString(), DT.ID.ToString().IsNull() ? null : DT.ID.ToString(), OP.ID.ToString().IsNull() ? null : OP.ID.ToString(), T.ID.ToString().IsNull() ? null : T.ID.ToString(), CKV.ID.ToString().IsNull() ? null : CKV.ID.ToString(), STAT.ID.ToString().IsNull() ? null : STAT.ID.ToString(), MW.IsNull() ? (Double?)null : Convert.ToDouble(MW), txt_Subject.Text, Ip_Num.IsNull() ? (int?)null : Convert.ToInt32(Ip_Num), H1N_ID.IsNull() ? (int?)null : Convert.ToInt32(H1N_ID), NOMS_Slip_Num.IsNull() ? (int?)null : Convert.ToInt32(NOMS_Slip_Num), NMS_Updated.IsNull() ? (DateTime?)null : Convert.ToDateTime(NMS_Updated), Received_Date.IsNull() ? (DateTime?)null : Convert.ToDateTime(Received_Date), Actual_IS_Date.IsNull() ? (DateTime?)null : Convert.ToDateTime(Actual_IS_Date), Scheduled_IS_Date.IsNull() ? (DateTime?)null : Convert.ToDateTime(Scheduled_IS_Date), UpSt.ID.ToString().IsNull() ? null : UpSt.ID.ToString(), UpFd.ID.ToString().IsNull() ? null : UpFd.ID.ToString(), txtHVCircuit.Text, cmbbxSIA.Text); My problem is that I cannot insert values into the database when the datetime variables and the integer variables are null. All of this data is assigned to the variables from textboxes on the form. Below is the database function which takes in all the variables and inserts them into the database. public static void createNewProject(int? GCD_ID, string Project_Desc, string Proponent_Name, int Station_ID, string OpCentre, int Sector_ID, string PLZone, int Feeder, int DxTx_ID, int OpControl_ID, int Type_ID, int ConnKV_ID, int Status_ID, double? MW, string Subject, int? Ip_Num, int? H1N_ID, int? NOMS_Slip_Num, DateTime? NMS_Updated, DateTime? Received_Date, DateTime? Actual_IS_Date, DateTime? Scheduled_IS_Date, int UP_Station_ID, int UP_Feeder_ID, string @HV_Circuit, string SIA_Required) { SqlConnection conn = null; try { //Takes in all the employee details to be added to the database.
    conn = new SqlConnection(databaseConnectionString); conn.Open(); SqlCommand cmd = new SqlCommand("createNewProject", conn); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add(new SqlParameter("GCD_ID", GCD_ID)); cmd.Parameters.Add(new SqlParameter("Project_Desc", MiscFunctions.Capitalize(Project_Desc))); cmd.Parameters.Add(new SqlParameter("Proponent_Name", MiscFunctions.Capitalize(Proponent_Name))); cmd.Parameters.Add(new SqlParameter("Station_ID", Station_ID)); cmd.Parameters.Add(new SqlParameter("OpCentre", OpCentre)); cmd.Parameters.Add(new SqlParameter("Sector_ID", Sector_ID)); cmd.Parameters.Add(new SqlParameter("PLZone", PLZone)); cmd.Parameters.Add(new SqlParameter("Feeder", Feeder)); cmd.Parameters.Add(new SqlParameter("DxTx_ID", DxTx_ID)); cmd.Parameters.Add(new SqlParameter("OpControl_ID", OpControl_ID)); cmd.Parameters.Add(new SqlParameter("Type_ID", Type_ID)); cmd.Parameters.Add(new SqlParameter("ConnKV_ID", ConnKV_ID)); cmd.Parameters.Add(new SqlParameter("Status_ID", Status_ID)); cmd.Parameters.Add(new SqlParameter("MW", MW)); cmd.Parameters.Add(new SqlParameter("Subject", Subject)); cmd.Parameters.Add(new SqlParameter("Ip_Num", Ip_Num)); cmd.Parameters.Add(new SqlParameter("H1N_ID", H1N_ID)); cmd.Parameters.Add(new SqlParameter("NOMS_Slip_Num", NOMS_Slip_Num)); cmd.Parameters.Add(new SqlParameter("NMS_Updated", NMS_Updated)); cmd.Parameters.Add(new SqlParameter("Received_Date", Received_Date)); cmd.Parameters.Add(new SqlParameter("Actual_IS_Date", Actual_IS_Date)); cmd.Parameters.Add(new SqlParameter("Scheduled_IS_Date", Scheduled_IS_Date)); cmd.Parameters.Add(new SqlParameter("UP_Station_ID", UP_Station_ID)); cmd.Parameters.Add(new SqlParameter("UP_Feeder_ID", UP_Feeder_ID)); cmd.Parameters.Add(new SqlParameter("HV_Circuit", HV_Circuit)); cmd.Parameters.Add(new SqlParameter("SIA_Required", SIA_Required)); cmd.ExecuteNonQuery(); } catch (Exception e) //returns if error incurred. { MessageBox.Show("Error occured in createNewProject" + Environment.NewLine + e.ToString()); } finally { if (conn != null) { conn.Close(); } } } My question is, how do I insert these values into the database when they are null? Please, please help
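
    For what it's worth, the usual stumbling block here is that a SqlParameter whose Value is a CLR null is treated as "not supplied" rather than as SQL NULL; the provider expects DBNull.Value instead. A hedged sketch of that pattern (the helper name is made up, and the inline form `(object)NMS_Updated ?? DBNull.Value` works just as well with the SqlParameter constructor used above):

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class SqlParamHelper
    {
        // Wraps a possibly-null value so that null becomes DBNull.Value,
        // which ADO.NET sends to the stored procedure as SQL NULL.
        public static void AddNullable(this SqlCommand cmd, string name, object value)
        {
            cmd.Parameters.AddWithValue(name, value ?? DBNull.Value);
        }
    }

    // Usage inside createNewProject, for the nullable parameters shown above:
    // cmd.AddNullable("@NMS_Updated", NMS_Updated);   // DateTime? -> SQL NULL when null
    // cmd.AddNullable("@Ip_Num", Ip_Num);             // int?      -> SQL NULL when null
    // cmd.AddNullable("@Received_Date", Received_Date);
    ```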

    Read the article

  • Issue while creating an android project with phonegap

    - by Mohit Jain
    When I try to create a native Android project in Eclipse it works perfectly fine, which shows my Android setup is proper, but when I try to create a PhoneGap project it creates an error, i.e.: ./create ~/Documents/workspace/HelloWorld com.fizzysoftware.HelloWorld HelloWorld BUILD FAILED /Users/mohit/Documents/eclipse/android-sdk-macosx/tools/ant/build.xml:710: The following error occurred while executing this line: /Users/mohit/Documents/eclipse/android-sdk-macosx/tools/ant/build.xml:723: Compile failed; see the compiler error output for details. Total time: 5 seconds An unexpected error occurred: ant jar > /dev/null exited with 1 Deleting project... Cordova version: 2.7, Android API version 14. PS: I am a Ruby on Rails developer. This is my day 1 with PhoneGap/Android/iOS.

    Read the article

  • Drupal using views with CCK custom fields

    - by jackbot
    I've got a Drupal site which uses a custom field for a certain type of node (person_id) which corresponds to a particular user. I want to create a view so that when logged in, a user can see a list of nodes 'tagged' with their person_id. I've got the view working fine, with a url of my-library/username but replacing username with a different username shows a list of all nodes tagged with that user. What I want to do is stop users changing the URL and seeing other users' tagged nodes. How can I do this? Is there somewhere where I can dictate that the only valid argument for this page is the one that corresponds with the current logged in user's username?

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch for operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
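
    If you want to try the hint on the weights example yourself, it simply goes on the end of the statement. A small illustration against AdventureWorks (the table and column come from the example above; which plan you actually get will depend on your data and statistics):

    ```sql
    -- Original shape: typically a (blocking) Distinct Sort
    SELECT DISTINCT Weight
    FROM Production.Product;

    -- Optimised for the first SSIS buffer: more likely to pick a streaming plan
    SELECT DISTINCT Weight
    FROM Production.Product
    OPTION (FAST 10000);
    ```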

    Read the article

  • Comparison of SSL Certificates

    - by Walley
    My web application most definitely needs an SSL certificate. I was looking into GoDaddy's: http://www.godaddy.com/ssl/ssl-certificates.aspx The Standard doesn't appear to have the lock in the URL bar, which a lot of users might not consider secure... How true is this? The Standard has https:// obviously in the bar, but is that enough to persuade users to want to enter confidential information?? I'd appreciate any experience anyone has had with this, or any alternatives they've dealt with. Is $100/year really the going rate for an SSL cert that has the lock in the URL bar? Thanks SO!

    Read the article

  • [Newbie] How to join mysql tables

    - by Ivan
    I have an old table like this: user> id | name | address | comments And now I have to create an "alias" table to allow some users to have an alias name for some reason. I've created a new table 'user_alias' like this: user_alias> name | user But now I have a problem due to my poor SQL level... How do I join both tables to generate something like this: 1 | my_name | my_address | my_comments 1 | my_alias | my_address | my_comments 2 | other_name | other_address | other_comments I mean, I want to make a "SELECT..." query that returns, in the same format as the "user" table, ALL users and ALL aliases... Something like this: SELECT user.* FROM user LEFT JOIN user_alias ON `user`=`id` but it doesn't work for me...
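
    One way to read that requirement (a sketch only, using the table and column names from the question and not tested against your data): return every user once with their real name, then add one extra row per alias in the same shape, via a UNION:

    ```sql
    -- Real names, straight from the user table
    SELECT u.id, u.name, u.address, u.comments
    FROM user AS u

    UNION ALL

    -- One additional row per alias, shaped like the user table
    SELECT u.id, ua.name, u.address, u.comments
    FROM user AS u
    INNER JOIN user_alias AS ua ON ua.user = u.id

    ORDER BY id;
    ```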

    Read the article

  • Odd "Object reference not set to an instance of an object" involving xWinForms

    - by Kyle
    Hey, I've been trying to get the xWinForms 3.0 library (a library with forms support in xna) working with my C# XNA Game project but I keep getting the same problem. I add the reference to my project, put in the using statement, declare a formCollection variable and then I try to initialize it. whenever I run the project I get stopped on this line: formCollection = new FormCollection(this.Window, Services, ref graphics); it gives me the error: " System.NullReferenceException was unhandled Message="Object reference not set to an instance of an object." Source="Microsoft.Xna.Framework" StackTrace: at Microsoft.Xna.Framework.Graphics.VertexShader..ctor(GraphicsDevice graphicsDevice, Byte[] shaderCode) at Microsoft.Xna.Framework.Graphics.SpriteBatch.ConstructPlatformData() at Microsoft.Xna.Framework.Graphics.SpriteBatch..ctor(GraphicsDevice graphicsDevice) at xWinFormsLib.FormCollection..ctor(GameWindow window, IServiceProvider services, GraphicsDeviceManager& graphics) at GameSolution.Game2.LoadContent() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 45 at Microsoft.Xna.Framework.Game.Initialize() at GameSolution.Game2.Initialize() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 37 at Microsoft.Xna.Framework.Game.Run() at GameSolution.Program.Main(String[] args) in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Program.cs:line 14 InnerException: " In a project I downloaded that used the xWinForms, I put the following code in and it compiled and ran no error. but when I put it in my project I get the error. Am I making some stupid mistake about including dlls or something? I've been at this for hours and I can't seem to find anything that would cause this. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; using Microsoft.Xna.Framework.Net; using Microsoft.Xna.Framework.Storage; using xWinFormsLib; namespace GameSolution { public class Game2 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; FormCollection formCollection; public Game2() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); formCollection = new FormCollection(this.Window, Services, ref graphics); } protected override void Update(GameTime gameTime) { base.Update(gameTime); } protected override void Draw(GameTime gameTime) { base.Draw(gameTime); } } } Any help would be greatly appreciated ._.

    Read the article

  • YouTube API Get all videos uploaded feed

    - by Paul
    Hi Guys, I can't seem to retrieve ALL videos from a particular channel on YouTube, despite the API giving example code that should perform just that. I'm using Java. http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads The above rss feed is the URL they suggest using along with the following sample code.. /*init the list*/ String feedUrl = "http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads"; VideoFeed videoFeeder = null; videoFeeder = serviceobject.getFeed(new URL(feedUrl), VideoFeed.class); Looping this with a for loop suggests 25 entries (as per the RSS). However - the actual number of videos uploaded is significantly larger. (662 at time of writing). My query is how on earth you retrieve everything with the API, not just a subset of the data. Any ideas on where I'm going wrong? Should I be using a different URL? http://www.youtube.com/GoogleDevelopers#g/a
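
    For what it's worth, that feed is paged — 25 entries per request by default, and up to 50 with max-results — so retrieving everything means walking the pages with start-index. A rough, untested sketch reusing the same getFeed call from above (serviceobject, VideoFeed and VideoEntry are the objects/classes already in play; exception handling and imports such as java.net.URL, java.util.*, com.google.gdata.data.youtube.* are omitted):

    ```java
    // Page through the uploads feed 50 entries at a time.
    String baseUrl = "http://gdata.youtube.com/feeds/api/users/GoogleDevelopers/uploads";
    List<VideoEntry> allVideos = new ArrayList<VideoEntry>();

    int startIndex = 1;        // GData start-index is 1-based
    final int pageSize = 50;   // maximum max-results for this feed

    while (true) {
        String pagedUrl = baseUrl + "?start-index=" + startIndex + "&max-results=" + pageSize;
        VideoFeed page = serviceobject.getFeed(new URL(pagedUrl), VideoFeed.class);

        if (page.getEntries().isEmpty()) {
            break;                           // ran off the end of the feed
        }
        allVideos.addAll(page.getEntries());
        startIndex += pageSize;
    }
    ```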

    Read the article

  • Adding Parsekit To An Xcode Project

    - by Garry
    I am trying to add the Parsekit framework to my OSX Xcode project. I've never added a 3rd party framework before and I can't get it to work right. I dragged the included Xcode project into my 'Groups & Files' pane and chose to Add it to my project. I then dragged Parsekit.framework underneath the Link Binary With Libraries heading. Then I double-clicked my target app and added Parsekit as a Direct Dependency. I also added libicucore.dylib as a Linked Library (as it says to do this on their site). Finally, in the Build settings tab of my target info I set the Header Search Paths to /Users/path/to/include/directory and the Other Linker Flags to -ObjC -all_load. Running this as a debug build work fine with no errors. However, when I build my app to release and then try to run the executable created, the app fails to load with the following error message: MyApp cannot be opened because of a problem. Check with the developer to make sure myApp works with this version of Mac OS X. Here is the dump from the crash reporter: Process: MyApp [11658] Path: /Users/Garry/Programming/Xcode/Mac/MyApp/build/Release/MyApp.app/Contents/MacOS/MyApp Identifier: com.yourcompany.MyApp Version: ??? (???) Code Type: X86-64 (Native) Parent Process: launchd [135] Date/Time: 2010-05-24 17:08:08.475 +0100 OS Version: Mac OS X 10.6.3 (10D573) Report Version: 6Interval Since Last Report: 133300 sec Crashes Since Last Report: 3 Per-App Crashes Since Last Report: 3 Anonymous UUID: DF0265E4-B5A0-45E1-8B71-D52A27CFDDCA Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dyld Error Message: Library not loaded: @executable_path/../Frameworks/ParseKit.framework/Versions/A/ParseKit Referenced from: /Users/Garry/Programming/Xcode/Mac/MyApp/build/Release/MyApp.app/Contents/MacOS/MyApp Reason: image not found Model: MacBookPro5,5, BootROM MBP55.00AC.B03, 2 processors, Intel Core 2 Duo, 2.53 GHz, 4 GB, SMC 1.47f2 Graphics: NVIDIA GeForce 9400M, NVIDIA GeForce 9400M, PCI, 256 MB Memory Module: global_name AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.10.91.27) Bluetooth: Version 2.3.1f4, 2 service, 2 devices, 1 incoming serial ports Network Service: AirPort, AirPort, en1 Network Service: Ethernet Adaptor (en6), Ethernet, en6 Serial ATA Device: Hitachi HTS545025B9SA02, 232.89 GB Serial ATA Device: HL-DT-ST DVDRW GS23N USB Device: Built-in iSight, 0x05ac (Apple Inc.), 0x8507, 0x24400000 USB Device: Internal Memory Card Reader, 0x05ac (Apple Inc.), 0x8403, 0x26500000 USB Device: IR Receiver, 0x05ac (Apple Inc.), 0x8242, 0x04500000 USB Device: Apple Internal Keyboard / Trackpad, 0x05ac (Apple Inc.), 0x0237, 0x04600000 USB Device: BRCM2046 Hub, 0x0a5c (Broadcom Corp.), 0x4500, 0x06100000 USB Device: Bluetooth USB Host Controller, 0x05ac (Apple Inc.), 0x8213, 0x06110000 After building the app, in addition to the executable file, Xcode is also creating a file called MyApp.app.dSYM. Any idea what that is?? I am developing with Xcode 3.2.2 on an Intel MBP running 10.6.3. Many thanks for any help offered.

    Read the article

  • rails: include statement with two ON conditions

    - by Markus
    Hi, I have three tables books bookmarks users where there is an n-to-m relation from books to users through bookmarks. I'm looking for a query where I get all the books of a certain user, including the bookmarks. If no bookmarks are there, there should be a null included... My SQL statement looks like: SELECT * FROM `books` LEFT OUTER JOIN `bookmarks` ON bookmarks.book_id = books.id AND bookmarks.user_id = ? In Rails I only know the :include statement, but how can I add the second bookmarks.user_id = ? condition in the ON section of this query? If I put it in the :conditions part, no null results would get returned! Thanks! Markus
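
    One workaround (a sketch in Rails 2.x-era syntax to match the question, not tested, and assuming a Book model for the books table): since :include only generates the association's own ON clause, hand-write the join so the user_id test stays inside the ON clause rather than in :conditions (which would land in the WHERE clause and drop the NULL rows):

    ```ruby
    # Keep the user_id test inside the ON clause so books without a bookmark
    # for this user still come back, with NULL bookmark columns.
    user_id = 42  # whatever the current user's id is

    books = Book.find(:all,
      :select => "books.*, bookmarks.id AS bookmark_id, bookmarks.user_id AS bookmark_user_id",
      :joins  => "LEFT OUTER JOIN bookmarks ON bookmarks.book_id = books.id " +
                 "AND bookmarks.user_id = #{user_id.to_i}")
    ```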

    Read the article

  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's Stored Procedures are structured this way, and there is no in-line SQL. Friend Shared Function Save(ByVal s As MyClass) As Boolean Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection Dim cmd As New SqlClient.SqlCommand Try cmd.Connection = cn cmd.CommandType = CommandType.StoredProcedure cmd.CommandText = "proc_save_my_class" cmd.Parameters.AddWithValue("@param1", s.Foo) cmd.Parameters.AddWithValue("@param2", s.Bar) Return True Finally Dal.Utility.CleanupAdoObjects(cmd, cn) End Try End Function Here is the Connection factory (if I am using the correct term): Friend Shared Function MyAppConnection() As SqlClient.SqlConnection Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString) cn.Open() If cn.State <> ConnectionState.Open Then ' CriticalException is a custom object inheriting from Exception. Throw New CriticalException("Could not connect to the database.") Else Return cn End If End Function Here is the Dal.Utility.CleanupAdoObjects() function: Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection) If cmd IsNot Nothing Then cmd.Dispose() If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close() End Sub I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting the SQL Server 2000 that would indicate a problem. Nothing in the Event Logs and nothing in the SQL Server logs. The timeouts happen randomly - I cannot reproduce. It happens early in the day with only 1 to 5 users in the system. It also happens with around 50 users in the system. The most connections to SQL Server via Performance Monitor, for all databases, has been about 74. The timeouts happen in code that both saves to, and reads from, the database in different parts of the application. The stack trace does not point to one or two offending DAL functions. It's happened in many different places. Does my ADO.NET code appear to be able to leak connections? I've googled around a bit, and I've read that if the connection pool fills up, this can happen. However, I'm not explicitly setting any connection pooling. I've even tried to increase the Connection Timeout in the connection string, but timeouts happen long before the 300 second (5 minute) value: <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/> I'm at a total loss already as to what is causing these Timeout issues. Any ideas are appreciated.
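
    As an aside, one way to make this kind of cleanup harder to get wrong is to let Using blocks dispose the command and connection, so the underlying connection returns to the pool even if an exception fires partway through. A sketch of the Save method above reshaped that way (names are taken from the question; the ExecuteNonQuery call is assumed, since the excerpt above does not show an execute call):

    ```vb
    Friend Shared Function Save(ByVal s As [MyClass]) As Boolean ' bracketed: MyClass is a VB keyword
        ' Using guarantees Dispose/Close even when an exception is thrown mid-way.
        Using cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection
            Using cmd As New SqlClient.SqlCommand("proc_save_my_class", cn)
                cmd.CommandType = CommandType.StoredProcedure
                cmd.Parameters.AddWithValue("@param1", s.Foo)
                cmd.Parameters.AddWithValue("@param2", s.Bar)
                cmd.ExecuteNonQuery() ' assumed; the original excerpt omits the execute call
                Return True
            End Using
        End Using
    End Function
    ```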

    Read the article

  • GWT MVP with a table

    - by Benju
    When working with MVP in GWT how would you work with a table? For example if you had a table of users does your view look like this? public interface MyDisplay{ HasValue<User> users(); } or would it be more like this? public interface MyDisplay{ HasValue<TableRow> rows(); } MVP makes a ton of sense until you start dealing with widgets that need to display lists of non-primitive data. Can anybody shed some light? This mailing list archive appears to ask the same question but never reaches a solid resolution... http://www.mail-archive.com/[email protected]/msg24546.html

    Read the article

  • How can I generate an Expression tree that queries an object with List<T> as a property?

    - by David Robbins
    Forgive my clumsy explanation, but I have a class that contains a List: public class Document { public int OwnerId { get; set; } public List<User> Users { get; set; } public Document() { } } public class User { public string UserName { get; set; } public string Department { get; set; } } Currently I use PredicateBuilder to perform dynamic queries on my objects. How can I turn the following LINQ statement into an Expression Tree: var predicate = PredicateBuilder.True<User>(); predicate = predicate.And<User>(user => user.Department == "HR"); var deptDocs = documents.AsQueryable() .Where(doc => doc.Users .AsQueryable().Count(predicate) > 0) .ToList(); In other words var deptDocs = documents.HasUserAttributes("Department", "HR").ToList();
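
    One hedged way to package that test up as a reusable expression (a sketch only; HasUserMatching is a made-up helper name, not an existing API, and PredicateBuilder is assumed to be the LINQKit-style builder referenced above). It builds doc => doc.Users.Any(...) by hand and splices the inner user predicate into the outer tree with Expression.Call:

    ```csharp
    using System;
    using System.Linq;
    using System.Linq.Expressions;

    static class DocumentPredicates
    {
        // Builds: doc => doc.Users.Any(userPredicate)
        public static Expression<Func<Document, bool>> HasUserMatching(
            Expression<Func<User, bool>> userPredicate)
        {
            var doc = Expression.Parameter(typeof(Document), "doc");
            var users = Expression.Property(doc, "Users");

            // Enumerable.Any<User>(doc.Users, userPredicate)
            var anyCall = Expression.Call(
                typeof(Enumerable), "Any", new[] { typeof(User) },
                users, userPredicate);

            return Expression.Lambda<Func<Document, bool>>(anyCall, doc);
        }
    }

    // Usage, mirroring the question:
    // var predicate = PredicateBuilder.True<User>();
    // predicate = predicate.And(user => user.Department == "HR");
    // var deptDocs = documents.AsQueryable()
    //     .Where(DocumentPredicates.HasUserMatching(predicate))
    //     .ToList();
    ```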

    Read the article

  • distinct in Xpath ?

    - by Antoine
    Hi, I have this XML file, from which I'd like to count the number of users referenced in it. But they can appear in more than one category, and I'd like these duplicates not to be taken into account. In the example below, the query should return 3 and not 4. Is there a way in XPath to do so? Users are not sorted at all. <list> <group name='QA'> <user name='name1'>name1@email</user> <user name='name2'>name2@email</user> </group> <group name='DEV'> <user name='name3'>name3@email</user> <user name='name2'>name2@email</user> </group> </list>
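
    In XPath 1.0 there is no distinct-values(), but you can count only the first occurrence of each user via the preceding axis. Two hedged variants, one keyed on the name attribute and one on the element text (the email), either of which should return 3 for the sample above; under XPath 2.0 you could write count(distinct-values(//user/@name)) instead:

    ```
    count(//user[not(@name = preceding::user/@name)])

    count(//user[not(. = preceding::user)])
    ```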

    Read the article

  • How do I persist form data across an "access denied" page in Drupal?

    - by Michael T. Smith
    We're building a small sub-site that, on the front page, has a one input box form that users can submit. From there, they're taken to a page with a few more associated form fields (add more details, tag it, etc.) with the first main form field already filled in. This works splendidly, thus far. The problem comes for users that are not logged in. When they submit that first form, they're taken to a (LoginToboggan based) login page that allows them to login. After they login, they redirect to the second form page, but the first main form field isn't filled in -- in other words, the form data didn't persist. How can we store that data and have it persist across the access denied page?

    Read the article

  • Attachment problem in application

    - by ahmed
    Hello, I've got a situation over here. We have a .NET application running on intranet ServerA (database/application server). Now we have also implemented the application on another machine, ServerB (application server only, Windows Server 2003, IIS 6.0), with the database pointed towards ServerA. Everything works fine, but the problem is we have some web forms where we are allowing some users to attach files like documents, PDFs, etc. The public users are facing problems with the attachments. Now I need to find a solution for this. Any help will be appreciated. Thanking you all in advance.

    Read the article

  • geographical deployment Vs geo load balancing SharePoint 2010

    - by vrajaraman
    We have a company-wide SharePoint portal planned for a few thousand users. Since the users and their applications (hosted in SharePoint) are distributed among different countries, we would like to consider geo deployment vs. geo load balancing. Please share your input. We are aware of the following: a geo SharePoint cluster means farms at the central and other sites, with databases split regionally, and a two-database cluster kept in sync using log shipping, SAN sync, or SQL 2008 features like database mirroring; versus load balancing by URL with some 3rd-party tool, with all farms, sites, and databases centralised. Benefits expected: 1. high availability, 2. disaster recovery management, 3. maintenance. I hope I haven't missed any of the points that should be covered.

    Read the article

  • WPF: isolated storage file path too long

    - by user342961
    Hi, I'm deploying my WPF app with ClickOnce. When developing locally in Visual Studio, I store files in the isolated storage by calling IsolatedStorageFile.GetUserStoreForDomain(). This works just fine and the generated path is C:\Users\Frederik\AppData\Local\IsolatedStorage\phqduaro.crw\hux3pljr.cnx\StrongName.kkulk3wafjkvclxpwvxmpvslqqwckuh0\Publisher.ui0lr4tpq53mz2v2c0uqx21xze0w22gq\Files\FilerefData\-581750116 (189 chars) But when I deploy my app with ClickOnce, the generated path becomes too long, resulting in a DirectoryNotFoundException when creating the isolated storage directory. The generated path with ClickOnce is: C:\Users\Frederik\AppData\Local\Apps\2.0\Data\OQ0LNXJT.R5V\8539ABHC.ODN\exqu..tion_e07264ceafd7486e_0001.0000_b8f01b38216164a0\Data\StrongName.wy0cojdd3mpvq45404l3gxdklugoanvi\Publisher.ui0lr4tpq53mz2v2c0uqx21xze0w22gq\Files\FilerefData\-581750116 (247 chars) When I browse the folders all but the last directory of the path exists. Then when trying to create a folder at this location windows tells me I can't create a directory because the resulting path name will be too long. How can I shorten the path generated by the IsolatedStorage?

    Read the article

  • Temporarily save Word document changes

    - by willvv
    Hi, I'm using Interop to create a "custom Word editor"; basically I've incorporated MS Word into my application and I allow users to edit a document and insert custom fields from a predefined list. I provide a "Preview" option to see how the document will appear when the data is added. The users can start editing a template in my application, and at any time they can hit preview and the preview should appear with the latest changes. I want the user to be able to preview the document without saving the changes; the problem is that when I invoke SaveAs() on the document (to create a temp file that I can use as the input for the preview generator), the editor opens the temp document. Is there a way to save a copy of the document being edited but keep the original (open) document with its changes unsaved? Thanks a lot

    Read the article
