Search Results

Search found 32789 results on 1312 pages for 'object relational mapping'.

Page 531/1312 | < Previous Page | 527 528 529 530 531 532 533 534 535 536 537 538  | Next Page >

  • How do I initialize a Scala map with more than 4 initial elements in Java?

    - by GlenPeterson
    For 4 or fewer elements, something like this works (or at least compiles): import scala.collection.immutable.Map; Map<String,String> HAI_MAP = new Map4<>("Hello", "World", "Happy", "Birthday", "Merry", "XMas", "Bye", "For Now"); For a 5th element I could do this: Map<String,String> b = HAI_MAP.$plus(new Tuple2<>("Later", "Aligator")); But I want to know how to initialize an immutable map with 5 or more elements and I'm flailing in Type-hell. Partial Solution I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files. Here's the scala: object JavaMapTest { def main(args: Array[String]) = { val HAI_MAP = Map(("Hello", "World"), ("Happy", "Birthday"), ("Merry", "XMas"), ("Bye", "For Now"), ("Later", "Aligator")) println("My map is: " + HAI_MAP) } } But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java): scala.collection.immutable.Map HAI_MAP = (scala.collection.immutable.Map) scala.Predef..MODULE$.Map().apply(scala.Predef..MODULE$.wrapRefArray( scala.Predef.wrapRefArray( (Object[])new Tuple2[] { new Tuple2("Hello", "World"), new Tuple2("Happy", "Birthday"), new Tuple2("Merry", "XMas"), new Tuple2("Bye", "For Now"), new Tuple2("Later", "Aligator") })); I'm really baffled by the two periods in this: scala.Predef..MODULE$ I asked about it on #java on Freenode and they said the .. looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost. Based on my experimentation, the following is valid: Tuple2[] x = new Tuple2[] { new Tuple2<String,String>("Hello", "World"), new Tuple2<String,String>("Happy", "Birthday"), new Tuple2<String,String>("Merry", "XMas"), new Tuple2<String,String>("Bye", "For Now"), new Tuple2<String,String>("Later", "Aligator") }; scala.collection.mutable.WrappedArray<Tuple2> y = scala.Predef.wrapRefArray(x); There is even a WrappedArray.toMap() method but the types of the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java.
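
    A minimal sketch of one way around the type trouble, building on the partial solution above: start from the four-element map that already compiles and chain $plus (the compiled name of Scala's + method) for each extra entry. This assumes a Scala 2.x library on the Java classpath and that Map.Map4 is reachable from Java the way the question reports:

        import scala.Tuple2;
        import scala.collection.immutable.Map;

        public class JavaMapTest {
            public static void main(String[] args) {
                // Four entries via Map4, exactly as in the question above.
                Map<String, String> haiMap = new Map.Map4<String, String>(
                        "Hello", "World",
                        "Happy", "Birthday",
                        "Merry", "XMas",
                        "Bye", "For Now");
                // Each $plus returns a new immutable map, so keep reassigning the reference.
                haiMap = haiMap.$plus(new Tuple2<String, String>("Later", "Aligator"));
                System.out.println("My map is: " + haiMap);
            }
        }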

    Read the article

  • Dividing up spritesheet in Javascript

    - by hustlerinc
    I would like to implement an object for my spritesheets in JavaScript. I'm very new to this language and game development so I don't really know how to do it. My guess is I set spritesize to 16, use that to divide as many times as it fits on the spritesheet and store this value as "spritesheet". Then a for(i=0;i<spritesheet.length;i++) loop running over the coordinates. Then tile = new Image(); and tile.src = spritesheet[i] to store the individual sprites based on their coordinates on the spritesheet. My problem is how could I loop through the spritesheet and make an array of that? The result should be similar to: var tile = Array( "img/ground.png", "img/crate.png" ); If possible this would be done with one single object that I only access once, and the tile array would be stored for later reference. I couldn't find anything similar searching for "javascript spritesheet". Edit: I made a small prototype of what I'm after: function Sprite(){ this.size = 16; this.spritesheet = new Image(); this.spritesheet.src = 'img/spritesheet.png'; this.countX = this.spritesheet.width / 16; this.countY = this.spritesheet.height / 16; this.spriteCount = this.countX * this.countY; this.divide = function(){ for(i=0;i<this.spriteCount;i++){ // define spritesheet coordinates and store as tile[i] } } } Am I on the right track?
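
    One possible sketch of the divide step from that prototype (file name and the 16px sprite size are taken from the question; treat them as assumptions). Rather than producing one image per sprite, it stores the source x/y of each tile so a single drawImage call can clip it from the sheet later:

        function Sprite() {
            this.size = 16;
            this.spritesheet = new Image();
            this.spritesheet.src = 'img/spritesheet.png';
            this.tiles = [];                         // will hold {x, y} clip coordinates

            var self = this;
            this.spritesheet.onload = function () {  // width/height are only valid after load
                self.countX = self.spritesheet.width / self.size;
                self.countY = self.spritesheet.height / self.size;
                self.spriteCount = self.countX * self.countY;
                for (var i = 0; i < self.spriteCount; i++) {
                    self.tiles[i] = {
                        x: (i % self.countX) * self.size,
                        y: Math.floor(i / self.countX) * self.size
                    };
                }
            };
        }

        // Later, to draw tile i onto a canvas context ctx at (dx, dy):
        // ctx.drawImage(sprite.spritesheet, sprite.tiles[i].x, sprite.tiles[i].y,
        //               sprite.size, sprite.size, dx, dy, sprite.size, sprite.size);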

    Read the article

  • Issue while uploading an image to a SharePoint 2010 picture library

    - by Gino Abraham
    I was trying to upload an image to my picture library using the SharePoint client object model. I used the code from the blog below to upload a file to my picture library. http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx The image got uploaded successfully. But when we took the relative URL to use in a different list, we were getting the empty-image symbol. After a lot of analysis we figured out that the issue was with the file we uploaded: an image that was really a JPEG had been uploaded with a .gif extension. Try this. Copy a JPG file from the net and save it to your file system. Change the extension of the file from .jpg to .gif. When you change the file extension the image content stays the same and it will still open in picture viewers. Upload the file to your picture library. Once uploaded you will see the file listed as a thumbnail in your picture library. Click on the thumbnail image and it will open a page showing a larger image with file details. Now click either on the image or the file name hyperlink; it will open an empty page with the default no-image symbol. I wasted a lot of time figuring out this issue, so I thought of sharing it here. Hope this helps someone.

    Read the article

  • ASP.NET Session Management

    - by geekrutherford
    Great article (a little old but still relevant) about the inner workings of session management in ASP.NET: Underpinnings of the Session State Management Implementation in ASP.NET. Using StateServer, and the BinaryFormatter serialization it entails, caused me quite the headache over the last few days. Curiously, it appears the w3wp.exe process actually consumes more memory when utilizing StateServer and storing somewhat large and complex data types in session. Users began experiencing Out Of Memory exceptions in the production environment. Looking at the stack trace, it related to serialization using the BinaryFormatter. Using remote debugging against our QA server I noted that the code in the application functioned without issue. The exception occurred outside the context of the application itself, after the request had completed and the web server was trying to serialize session state into the StateServer. The short-term solution is switching back to the InProc mode. Thus far this has proven to consume considerably less memory and has caused no issues. Long term, the complex object stored in session will be off-loaded into a web service used to access the information directly from the database, outside the context of the object used to encapsulate it.
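
    For reference, the short-term switch described above is a one-line change in web.config; a minimal sketch (the timeout value is just an assumption):

        <!-- Before: session state serialized out of process via BinaryFormatter -->
        <sessionState mode="StateServer" stateConnectionString="tcpip=127.0.0.1:42424" timeout="20" />

        <!-- After: session state kept in the worker process, no serialization step -->
        <sessionState mode="InProc" timeout="20" />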

    Read the article

  • Question about separating game core engine from game graphics engine...

    - by Conrad Clark
    Suppose I have a SquareObject class, which implements IDrawable, an interface which contains the method void Draw(). I want to separate drawing logic itself from the game core engine. My main idea is to create a static class which is responsible to dispatch actions to the graphic engine. public static class DrawDispatcher<T> { private static Action<T> DrawAction = new Action<T>((ObjectToDraw)=>{}); public static void SetDrawAction(Action<T> action) { DrawAction = action; } public static void Dispatch(this T Obj) { DrawAction(Obj); } } public static class Extensions { public static void DispatchDraw<T>(this object Obj) { DrawDispatcher<T>.DispatchDraw((T)Obj); } } Then, on the core side: public class SquareObject: GameObject, IDrawable { #region Interface public void Draw() { this.DispatchDraw<SquareObject>(); } #endregion } And on the graphics side: public static class SquareRender{ //stuff here public static void Initialize(){ DrawDispatcher<SquareObject>.SetDrawAction((Square)=>{//my square rendering logic}); } } Do this "pattern" follow best practices? And a plus, I could easily change the render scheme of each object by changing the DispatchDraw parameter, as in: public class SuperSquareObject: GameObject, IDrawable { #region Interface public void Draw() { this.DispatchDraw<SquareObject>(); } #endregion } public class RedSquareObject: GameObject, IDrawable { #region Interface public void Draw() { this.DispatchDraw<RedSquareObject>(); } #endregion } RedSquareObject would have its own render method, but SuperSquareObject would render as a normal SquareObject I'm just asking because i do not want to reinvent the wheel, and there may be a design pattern similar (and better) to this that I may be not acknowledged of. Thanks in advance!
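
    A minimal sketch of how the pieces above might be wired together at startup, assuming the DispatchDraw extension forwards to DrawDispatcher<T>.Dispatch (the two names differ slightly in the snippet above):

        public static class Game
        {
            public static void Main()
            {
                // Graphics side registers its render logic once, at initialization.
                SquareRender.Initialize();
                DrawDispatcher<RedSquareObject>.SetDrawAction(square => { /* red square rendering logic */ });

                // Core side only knows about IDrawable; the dispatcher routes each Draw call
                // to whatever render action was registered for that concrete type.
                IDrawable[] scene = { new SquareObject(), new SuperSquareObject(), new RedSquareObject() };
                foreach (var drawable in scene)
                    drawable.Draw();
            }
        }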

    Read the article

  • How to Change System Application Pages (Like AccessDenied.aspx, Signout.aspx etc)

    - by Jayant Sharma
    An advantage of SharePoint 2010 over SharePoint 2007 is that we can programmatically change the URL of system application pages. For example, it was not very easy to change the URL of the AccessDenied.aspx page in SharePoint 2007, but in SharePoint 2010 we can easily change the URL with just a few lines of code. For this purpose we have two methods available: GetMappedPage and UpdateMappedPage; the latter returns true if the custom application page is successfully mapped, otherwise false. You can override the following pages (member name: page): None; AccessDenied: AccessDenied.aspx; Confirmation: Confirmation.aspx; Error: Error.aspx; Login: Login.aspx; RequestAccess: ReqAcc.aspx; Signout: SignOut.aspx; WebDeleted: WebDeleted.aspx. ( http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.spcustompage.aspx ) So now it's time for the implementation: using (SPSite site = new SPSite("http://testserver01")) {   //Get a reference to the web application.   SPWebApplication webApp = site.WebApplication;   webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, "/_layouts/customPages/CustomAccessDenied.aspx");   webApp.Update(); } Similarly, you can use SPCustomPage.Confirmation, SPCustomPage.Error, SPCustomPage.Login, SPCustomPage.RequestAccess, SPCustomPage.Signout and SPCustomPage.WebDeleted to override these pages. To reset a mapping, set the target value to null, like webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, null); webApp.Update(); One restriction: you are limited to locations in the /_layouts folder. When updating the mapped page, the URL has to start with "/_layouts/". Ref: http://msdn.microsoft.com/en-us/library/gg512103.aspx#bk_spcustapp http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.updatemappedpage.aspx

    Read the article

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems, as described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As these threads update themselves, they produce some tasks and put them into a public storage area. Once all the threads finish their update, each thread goes and copies the tasks for itself one by one. After all the threads finish their task copying, we make the threads process those tasks and update the threads simultaneously as described before. So this is the general idea of the task scheduling part of our engine. OK, well, the task scheduling part works well, but here's the problem. Taking the Camera as the simplest example: local oldPos = camera:getPosition() -- ( 0, 0, 0 ) camera:setPosition( 1, 1, 1 ) -- Won't work now, cuz the render thread will process the task at the beginning of the next frame local newPos = camera:getPosition() -- Still ( 0, 0, 0 ) So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. So, is there a way to solve this problem? Or have we built the task scheduling part in the wrong way? Thanks for your answers :)

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and it's built in Effects parameter to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is, how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment mapping shader such a parameter doesn't make a lot of sense. How I have it right now is that every time I call the ModelMesh's Draw() function, I set all the Effect parameters as so foreach (ModelMesh m in model.Meshes) { if (isDrawBunny == true)//Slightly change the way the world matrix is calculated if using the bunny object, since it is not quite centered in object space { world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position + bunnyPositionTransform); } else //If not rendering the bunny, draw normally { world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position); } foreach (Effect e in m.Effects) { Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix; e.Parameters["ViewProjection"].SetValue(ViewProjection); e.Parameters["World"].SetValue(world); e.Parameters["diffuseLightPosition"].SetValue(lightPositionW); e.Parameters["CameraPosition"].SetValue(camera.Position); e.Parameters["LightColor"].SetValue(lightColor); e.Parameters["MaterialColor"].SetValue(materialColor); e.Parameters["shininess"].SetValue(shininess); //e.Parameters //e.Parameters["normal"] } m.Draw(); Note the prescience of the example! The solutions I've thought of involve preloading all the shaders, and updating the unique parameters as needed. So my question is, is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!
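
    One common way to cope with shader-specific parameters (sketched below as one option, not necessarily best practice) is to probe the effect's parameter collection before setting a value, on the assumption that looking up a name the shader does not declare yields null rather than an exception. The TrySet helper and the reflectivity parameter are illustrative, not part of the project above:

        // Hypothetical helper: set a parameter only when the current effect actually declares it.
        static void TrySet(Effect e, string name, float value)
        {
            EffectParameter p = e.Parameters[name];   // null when this shader has no such parameter
            if (p != null)
                p.SetValue(value);
        }

        // Inside the draw loop:
        foreach (Effect e in m.Effects)
        {
            e.Parameters["World"].SetValue(world);     // common to every shader
            TrySet(e, "shininess", shininess);         // only Phong-style shaders declare this
            TrySet(e, "reflectivity", reflectivity);   // hypothetical environment-map parameter
        }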

    Read the article

  • Would it be a good idea to work on letting people add arrays of numbers in javascript?

    - by OneThreeSeven
    I am a very mathematically oriented programmer, and I happen to be doing a lot of JavaScript these days. I am really disappointed in the math aspects of JavaScript: the Math object is almost a joke because it has so few methods; you can't use ^ for exponentiation; the + operator is very limited, so you can't add arrays of numbers or do scalar multiplication on arrays. Now, I have written some pretty basic extensions to the Math object and have considered writing a library of advanced math features; amazingly there doesn't seem to be any sort of standard library already out, even for calculus, although there is one for vectors and matrices I was able to find. The notation for working with vectors and matrices is really bad when you can't use the + operator on arrays, and you can't do scalar multiplication. For example, here is a hideous expression for subtracting two vectors, A - B: Math.vectorAddition(A,Math.scalarMultiplication(-1,B)); I have been looking for some kind of open-source project to contribute to for a while, and even though my C++ is a bit rusty I would very much like to get into the code for the V8 engine and extend the + operator to work on arrays, get scalar multiplication to work, and possibly get the ^ operator to work for exponentiation. These things would greatly enhance the utility of any mathematical JavaScript framework. I really don't know how to get involved in something like the V8 engine other than downloading the code and starting to work on it. Of course I'm afraid that, since V8 is Chrome-specific, without browser cross-compatibility a fundamental change of this type is likely to be rejected for V8. I was hoping someone could either tell me why this is a bad idea, or else give me some pointers about how to proceed at this point to get some kind of approval to add these features. Thanks!

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3d game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object, and calling a 1 second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start of the still running timers is to be saved to an XML file such that when the game is started again, any still running timers will have a calculation done to see if the timer is then done, where the game will change the materials appropriately. I am still trying to figure out what type of timer to use, and see also if there are any suggestions for saving and calculating times over several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
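
    Given durations of minutes to days with no countdown UI, one lightweight approach (a sketch, not a definitive answer) is to avoid engine timers entirely and store each timer's UTC start time and duration, checking elapsed time whenever an object is examined; those same two values are what you would persist to XML for the save/reload calculation described above:

        using System;

        [Serializable]
        public class GameTimer
        {
            public DateTime StartUtc;          // persisted to the save file
            public double DurationSeconds;     // 5 minutes up to 4 days

            public GameTimer(TimeSpan duration)
            {
                StartUtc = DateTime.UtcNow;
                DurationSeconds = duration.TotalSeconds;
            }

            public bool IsDone
            {
                get { return (DateTime.UtcNow - StartUtc).TotalSeconds >= DurationSeconds; }
            }

            // Fraction complete, handy for deciding which material-change event is due.
            public double Progress
            {
                get { return Math.Min(1.0, (DateTime.UtcNow - StartUtc).TotalSeconds / DurationSeconds); }
            }
        }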

    Read the article

  • What's so bad about pointers in C++?

    - by Martin Beckett
    To continue the discussion in Why are pointers not recommended when coding with C++ Suppose you have a class that encapsulates objects which need some initialisation to be valid - like a network socket. // Blah manages some data and transmits it over a socket class socket; // forward declaration, so nice weak linkage. class blah { ... stuff TcpSocket *socket; } ~blah { // TcpSocket dtor handles disconnect delete socket; // or better, wrap it in a smart pointer } The ctor ensures that socket is marked NULL, then later in the code when I have the information to initialise the object. // initialising blah if ( !socket ) { // I know socket hasn't been created/connected // create it in a known initialised state and handle any errors // RAII is a good thing ! socket = new TcpSocket(ip,port); } // and when i actually need to use it if (socket) { // if socket exists then it must be connected and valid } This seems better than having the socket on the stack, having it created in some 'pending' state at program start and then having to continually check some isOK() or isConnected() function before every use. Additionally if TcpSocket ctor throws an exception it's a lot easier to handle at the point a Tcp connection is made rather than at program start. Obviously the socket is just an example, but I'm having a hard time thinking of when an encapsulated object with any sort of internal state shouldn't be created and initialised with new.
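
    For what it's worth, the "wrap it in a smart pointer" aside in the snippet can look like the sketch below (assuming C++11's std::unique_ptr, and a stand-in TcpSocket definition for illustration); the pointer still doubles as the "not yet connected" flag, but the null initialisation and the delete are no longer hand-written:

        #include <memory>
        #include <string>

        // Stand-in for the real socket class from the discussion above.
        class TcpSocket {
        public:
            TcpSocket(const std::string& ip, int port) { /* connect or throw */ }
        };

        class blah {
        public:
            void connect(const std::string& ip, int port) {
                if (!socket_) {
                    // RAII: constructed in a known, connected state, or the ctor throws here,
                    // right where the connection is attempted.
                    socket_.reset(new TcpSocket(ip, port));
                }
            }

            void send() {
                if (socket_) {
                    // if the socket exists it is connected and valid
                }
            }

        private:
            std::unique_ptr<TcpSocket> socket_;  // null until connect(); deleted automatically
        };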

    Read the article

  • Fastest way to parse big json android

    - by jem88
    I've a doubt that doesn't let me sleep! :D I'm currently working with big json files, with many levels. I parse these object using the 'default' android way, so I read the response with a ByteArrayOutputStream, get a string and create a JSONObject from the string. All fine here. Now, I've to parse the content of the json to get the objects of my interest, and I really can't find a better way that parse it manually, like this: String status = jsonObject.getString("status"); Boolean isLogged = jsonObject.getBoolean("is_logged"); ArrayList<Genre> genresList = new ArrayList<Genre>(); // Get jsonObject with genres JSONObject jObjGenres = jsonObject.getJSONObject("genres"); // Instantiate an iterator on jsonObject keys Iterator<?> keys = jObjGenres.keys(); // Iterate on keys while( keys.hasNext() ) { String key = (String) keys.next(); JSONObject jObjGenre = jObjGenres.getJSONObject(key); // Create genre object Genre genre = new Genre( jObjGenre.getInt("id_genre"), jObjGenre.getString("string"), jObjGenre.getString("icon") ); genresList.add(genre); } // Get languages list JSONObject jObjLanguages = jsonObject.getJSONObject("languages"); Iterator jLangKey = jObjLanguages.keys(); List<Language> langList = new ArrayList<Language>(); while (jLangKey.hasNext()) { // Iterate on jlangKey obj String key = (String) jLangKey.next(); JSONObject jCurrentLang = (JSONObject) jObjLanguages.get(key); Language lang = new Language( jCurrentLang.getString("id_lang"), jCurrentLang.getString("name"), jCurrentLang.getString("code"), jCurrentLang.getString("active").equals("1") ); langList.add(lang); } I think this is really ugly, frustrating, timewaster, and fragile. I've seen parser like json-smart and Gson... but seems difficult to parse a json with many levels, and get the objects! But I guess that must be a better way... Any idea? Every suggestion will be really appreciated. Thanks in advance!
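
    Since Gson is mentioned: the nested-object layout above maps reasonably well to Map<String, T> fields, so a hedged sketch (class and field names are guessed from the manual parsing code, not from any published API) could replace most of the iterator boilerplate:

        import com.google.gson.Gson;
        import com.google.gson.annotations.SerializedName;
        import java.util.Map;

        class Genre {
            @SerializedName("id_genre") int idGenre;
            String string;
            String icon;
        }

        class Language {
            @SerializedName("id_lang") String idLang;
            String name;
            String code;
            String active;          // "1" or "0" in the payload; convert to boolean afterwards
        }

        class ApiResponse {
            String status;
            @SerializedName("is_logged") boolean isLogged;
            Map<String, Genre> genres;        // keys are the same ids the iterator walked over
            Map<String, Language> languages;
        }

        // Usage: ApiResponse resp = new Gson().fromJson(jsonString, ApiResponse.class);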

    Read the article

  • From TFS to Git

    - by Saeed Neamati
    I'm a .NET developer and I've used TFS (team foundation server) as my source control software many times. Good features of TFS are: Good integration with Visual Studio (so I do almost everything visually; no console commands) Easy check-out, check-in process Easy merging and conflict resolution Easy automated builds Branching Now, I want to use Git as the backbone, repository, and source control of my open source projects. My projects are in C#, JavaScript, or PHP language with MySQL, or SQL Server databases as the storage mechanism. I just used github.com's help for this purpose and I created a profile there, and downloaded a GUI for Git. Up to this part was so easy. But I'm almost stuck at going along any further. I just want to do some simple (really simple) operations, including: Creating a project on Git and mapping it to a folder on my laptop Checking out/checking in files and folders Resolving conflicts That's all I need to do now. But it seems that the GUI is not that user friendly. I expect the GUI to have a Connect To... or something like that, and then I expect a list of projects to be shown, and when I choose one, I expect to see the list of files and folders of that project, just like exploring your TFS project in Visual Studio. Then I want to be able to right click a file and select check-in... or check-out and stuff like that. Do I expect much? What should I do to easily use Git just like TFS? What am I missing here?
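
    For the three simple operations listed, the command-line equivalents are short even when a GUI hides them; a rough TFS-to-Git sketch (the repository URL and file name are placeholders):

        # "Connect to / map a project to a folder" ~= clone the GitHub repository locally
        git clone https://github.com/yourname/yourproject.git
        cd yourproject

        # "Check-out" has no real equivalent: every file is already editable in your working copy.
        # "Check-in" ~= stage, commit locally, then push to GitHub
        git add MyPage.cs
        git commit -m "Describe the change"
        git push origin master

        # Conflict resolution: pull, fix the conflicted files, then commit the merge
        git pull origin master
        git add MyPage.cs
        git commit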

    Read the article

  • Using LINQ to Twitter OAuth with Windows 8

    - by Joe Mayo
    In previous posts, I explained how to use LINQ to Twitter with Windows 8, but the example was a Twitter Search, which didn’t require authentication. Much of the Twitter API requires authentication, so this post will explain how you can perform OAuth authentication with LINQ to Twitter in a Windows 8 Metro-style application. Getting Started I have earlier posts on how to create a Windows 8 app and add pages, so I’ll assume it isn’t necessary to repeat here. One difference is that I’m using Visual Studio 2012 RC and some of the terminology and/or library code might be slightly different.  Here are steps to get started: Create a new Windows metro style app, selecting the Blank App project template. Create a new Basic Page and name it OAuth.xaml.  Note: You’ll receive a prompt window for adding files and you should click Yes because those files are necessary for this demo. Add a new Basic Page named TweetPage.xaml. Open App.xaml.cs and change !rootFrame.Navigate(typeof(MainPage)) to !rootFrame.Navigate(typeof(TweetPage)). Now that the project is set up you’ll see the reason why authentication is required by setting up the TweetPage. Setting Up to Tweet a Status In this section, I’ll show you how to set up the XAML and code-behind for a tweet.  The tweet logic will check to see if the user is authenticated before performing the tweet. To tweet, I put a TextBox and Button on the XAML page. The following code omits most of the page, concentrating primarily on the elements of interest in this post: <StackPanel Grid.Row="1"> <TextBox Name="TweetTextBox" Margin="15" /> <Button Name="TweetButton" Content="Tweet" Click="TweetButton_Click" Margin="15,0" /> </StackPanel> Given the UI above, the user types the message they want to tweet, and taps Tweet. This invokes TweetButton_Click, which checks to see if the user is authenticated.  If the user is not authenticated, the app navigates to the OAuth page.  If they are authenticated, LINQ to Twitter does an UpdateStatus to post the user’s tweet.  Here’s the TweetButton_Click implementation: void TweetButton_Click(object sender, RoutedEventArgs e) { PinAuthorizer auth = null; if (SuspensionManager.SessionState.ContainsKey("Authorizer")) { auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer; } if (auth == null || !auth.IsAuthorized) { Frame.Navigate(typeof(OAuthPage)); return; } var twitterCtx = new TwitterContext(auth); Status tweet = twitterCtx.UpdateStatus(TweetTextBox.Text); new MessageDialog(tweet.Text, "Successful Tweet").ShowAsync(); } For authentication, this app uses PinAuthorizer, one of several authorizers available in the LINQ to Twitter library. I’ll explain how PinAuthorizer works in the next section. What’s important here is that LINQ to Twitter needs an authorizer to post a Tweet. The code above checks to see if a valid authorizer is available. To do this, it uses the SuspensionManager class, which is part of the code generated earlier when creating OAuthPage.xaml. The SessionState property is a Dictionary<string, object> and I’m using the Authorizer key to store the PinAuthorizer.  If the user previously authorized during this session, the code reads the PinAuthorizer instance from SessionState and assigns it to the auth variable. If the user is authorized, auth would not be null and IsAuthorized would be true. Otherwise, the app navigates the user to OAuthPage.xaml, which I’ll discuss in more depth in the next section. When the user is authorized, the code passes the authorizer, auth, to the TwitterContext constructor. 
LINQ to Twitter uses the auth instance to build OAuth signatures for each interaction with Twitter.  You no longer need to write any more code to make this happen. The code above accepts the tweet just posted in the Status instance, tweet, and displays a message with the text to confirm success to the user. You can pull the PinAuthorizer instance from SessionState, instantiate your TwitterContext, and use it as you need. Just remember to make sure you have a valid authorizer, like the code above. As shown earlier, the code navigates to OAuthPage.xaml when a valid authorizer isn’t available. The next section shows how to perform the authorization upon arrival at OAuthPage.xaml. Doing the OAuth Dance This section shows how to authenticate with LINQ to Twitter’s built-in OAuth support. From the user perspective, they must be navigated to the Twitter authentication page, add credentials, be navigated to a Pin number page, and then enter that Pin in the Windows 8 application. The following XAML shows the relevant elements that the user will interact with during this process. <StackPanel Grid.Row="2"> <WebView x:Name="OAuthWebBrowser" HorizontalAlignment="Left" Height="400" Margin="15" VerticalAlignment="Top" Width="700" /> <TextBlock Text="Please perform OAuth process (above), enter Pin (below) when ready, and tap Authenticate:" Margin="15,15,15,5" /> <TextBox Name="PinTextBox" Margin="15,0,15,15" Width="432" HorizontalAlignment="Left" IsEnabled="False" /> <Button Name="AuthenticatePinButton" Content="Authenticate" Margin="15" IsEnabled="False" Click="AuthenticatePinButton_Click" /> </StackPanel> The WebView in the code above is what allows the user to see the Twitter authentication page. The TextBox is for entering the Pin, and the Button invokes code that will take the Pin and allow LINQ to Twitter to complete the authentication process. As you can see, there are several steps to OAuth authentication, but LINQ to Twitter tries to minimize the amount of code you have to write. The two important parts of the code to make this happen are the part that starts the authentication process and the part that completes the authentication process. The following code, from OAuthPage.xaml.cs, shows a couple events that are instrumental in making this process happen: public OAuthPage() { this.InitializeComponent(); this.Loaded += OAuthPage_Loaded; OAuthWebBrowser.LoadCompleted += OAuthWebBrowser_LoadCompleted; } The OAuthWebBrowser_LoadCompleted event handler enables UI controls when the browser is done loading – notice that the TextBox and Button in the previous XAML have their IsEnabled attributes set to False. When the Page.Loaded event is invoked, the OAuthPage_Loaded handler starts the OAuth process, shown here: void OAuthPage_Loaded(object sender, RoutedEventArgs e) { auth = new PinAuthorizer { Credentials = new InMemoryCredentials { ConsumerKey = "", ConsumerSecret = "" }, UseCompression = true, GoToTwitterAuthorization = pageLink => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => OAuthWebBrowser.Navigate(new Uri(pageLink, UriKind.Absolute))) }; auth.BeginAuthorize(resp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { switch (resp.Status) { case TwitterErrorStatus.Success: break; case TwitterErrorStatus.RequestProcessingException: case TwitterErrorStatus.TwitterApiError: new MessageDialog(resp.Error.ToString(), resp.Message).ShowAsync(); break; } })); } The PinAuthorizer, auth, a field of this class instantiated in the code above, assigns keys to the Credentials property. 
These are credentials that come from registering an application with Twitter, explained in the LINQ to Twitter documentation, Securing Your Applications. Notice how I use Dispatcher.RunAsync to marshal the web browser navigation back onto the UI thread. Internally, LINQ to Twitter invokes the lambda expression assigned to GoToTwitterAuthorization when starting the OAuth process.  In this case, we want the WebView control to navigate to the Twitter authentication page, which is defined with a default URL in LINQ to Twitter and passed to the GoToTwitterAuthorization lambda as pageLink. Then you need to start the authorization process by calling BeginAuthorize. This starts the OAuth dance, running asynchronously.  LINQ to Twitter invokes the callback assigned to the BeginAuthorize parameter, allowing you to take whatever action you need, based on the Status of the response, resp. As mentioned earlier, this is where the user performs the authentication process, enters the Pin, and clicks authenticate. The handler for authenticate completes the process and saves the authorizer for subsequent use by the application, as shown below: void AuthenticatePinButton_Click(object sender, RoutedEventArgs e) { auth.CompleteAuthorize( PinTextBox.Text, completeResp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { switch (completeResp.Status) { case TwitterErrorStatus.Success: SuspensionManager.SessionState["Authorizer"] = auth; Frame.Navigate(typeof(TweetPage)); break; case TwitterErrorStatus.RequestProcessingException: case TwitterErrorStatus.TwitterApiError: new MessageDialog(completeResp.Error.ToString(), completeResp.Message).ShowAsync(); break; } })); } The PinAuthorizer CompleteAuthorize method takes two parameters: Pin and callback. The Pin is from what the user entered in the TextBox prior to clicking the Authenticate button that invoked this method. The callback handles the response from completing the OAuth process. The completeResp holds information about the results of the operation, indicated by a Status property of type TwitterErrorStatus. On success, the code assigns auth to SessionState. You might remember SessionState from the previous description of TweetPage – this is where the valid authorizer comes from. After saving the authorizer, the code navigates the user back to TweetPage, where they can type in a message, click the Tweet button, and observe that they have successfully tweeted. Summary You’ve seen how to get started with using LINQ to Twitter in a Metro-style application. The generated code contained a SuspensionManager class with way to manage information across multiple pages via its SessionState property. You also saw how LINQ to Twitter performs authorization in two steps of starting the process and completing the process when the user provides a Pin number. Remember to marshal callback thread back onto the UI – you saw earlier how to use Dispatcher.RunAsync to accomplish this. There were a few steps in the process, but LINQ to Twitter did minimize the amount of code you needed to write to make it happen. You can download the MetroOAuthDemo.zip sample on the LINQ to Twitter Samples Page.   @JoeMayo

    Read the article

  • Problem with Ogre::Camera lookAt function when target is directly below.

    - by PigBen
    I am trying to make a class which controls a camera. It's pretty basic right now, it looks like this: class HoveringCameraController { public: void init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height); void update(Ogre::Real time_delta); private: Ogre::Camera * camera_; AnimatedBody * target_; Ogre::Real height_; }; HoveringCameraController.cpp void HoveringCameraController::init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height) { camera_ = &camera; target_ = &target; height_ = height; update(0.0); } void HoveringCameraController::update(Ogre::Real time_delta) { auto position = target_->getPosition(); position.y += height_; camera_->setPosition(position); camera_->lookAt(target_->getPosition()); } AnimatedBody is just a class that encapsulates an entity, it's animations and a scene node. The getPosition function is simply forwarded to it's scene node. What I want(for now) is for the camera to simply follow the AnimatedBody overhead at the distance given(the height parameter), and look down at it. It follows the object around, but it doesn't look straight down, it's tilted quite a bit in the positive Z direction. Does anybody have any idea why it would do that? If I change this line: position.y += height_; to this: position.x += height_; or this: position.z += height_; it does exactly what I would expect. It follows the object from the side or front, and looks directly at it.

    Read the article

  • Auto-hydrate your objects with ADO.NET

    - by Jake Rutherford
    Recently while writing the monotonous code for pulling data out of a DataReader to hydrate some objects in an application I suddenly wondered "is this really necessary?" You've probably asked yourself the same question, and many of you have: - Used a code generator - Used a ORM such as Entity Framework - Wrote the code anyway because you like busy work     In most of the cases I've dealt with when making a call to a stored procedure the column names match up with the properties of the object I am hydrating. Sure that isn't always the case, but most of the time it's 1 to 1 mapping.  Given that fact I whipped up the following method of hydrating my objects without having write all of the code. First I'll show the code, and then explain what it is doing.      /// <summary>     /// Abstract base class for all Shared objects.     /// </summary>     /// <typeparam name="T"></typeparam>     [Serializable, DataContract(Name = "{0}SharedBase")]     public abstract class SharedBase<T> where T : SharedBase<T>     {         private static List<PropertyInfo> cachedProperties;         /// <summary>         /// Hydrates derived class with values from record.         /// </summary>         /// <param name="dataRecord"></param>         /// <param name="instance"></param>         public static void Hydrate(IDataRecord dataRecord, T instance)         {             var instanceType = instance.GetType();                         //Caching properties to avoid repeated calls to GetProperties.             //Noticable performance gains when processing same types repeatedly.             if (cachedProperties == null)             {                 cachedProperties = instanceType.GetProperties().ToList();             }                         foreach (var property in cachedProperties)             {                 if (!dataRecord.ColumnExists(property.Name)) continue;                 var ordinal = dataRecord.GetOrdinal(property.Name);                 var isNullable = property.PropertyType.IsGenericType &&                                  property.PropertyType.GetGenericTypeDefinition() == typeof (Nullable<>);                 var isNull = dataRecord.IsDBNull(ordinal);                 var propertyType = property.PropertyType;                 if (isNullable)                 {                     if (!string.IsNullOrEmpty(propertyType.FullName))                     {                         var nullableType = Type.GetType(propertyType.FullName);                         propertyType = nullableType != null ? nullableType.GetGenericArguments()[0] : propertyType;                     }                 }                 switch (Type.GetTypeCode(propertyType))                 {                     case TypeCode.Int32:                         property.SetValue(instance,                                           (isNullable && isNull) ? (int?) null : dataRecord.GetInt32(ordinal), null);                         break;                     case TypeCode.Double:                         property.SetValue(instance,                                           (isNullable && isNull) ? (double?) null : dataRecord.GetDouble(ordinal),                                           null);                         break;                     case TypeCode.Boolean:                         property.SetValue(instance,                                           (isNullable && isNull) ? (bool?) 
null : dataRecord.GetBoolean(ordinal),                                           null);                         break;                     case TypeCode.String:                         property.SetValue(instance, (isNullable && isNull) ? null : isNull ? null : dataRecord.GetString(ordinal),                                           null);                         break;                     case TypeCode.Int16:                         property.SetValue(instance,                                           (isNullable && isNull) ? (int?) null : dataRecord.GetInt16(ordinal), null);                         break;                     case TypeCode.DateTime:                         property.SetValue(instance,                                           (isNullable && isNull)                                               ? (DateTime?) null                                               : dataRecord.GetDateTime(ordinal), null);                         break;                 }             }         }     }   Here is a class which utilizes the above: [Serializable] [DataContract] public class foo : SharedBase<foo> {     [DataMember]     public int? ID { get; set; }     [DataMember]     public string Name { get; set; }     [DataMember]     public string Description { get; set; }     [DataMember]     public string Subject { get; set; }     [DataMember]     public string Body { get; set; }            public foo(IDataRecord record)     {         Hydrate(record, this);                }     public foo() {} }   Explanation: - Class foo inherits from SharedBase specifying itself as the type. (NOTE SharedBase is abstract here in the event we want to provide additional methods which could be overridden by the instance class) public class foo : SharedBase<foo> - One of the foo class constructors accepts a data record which then calls the Hydrate method on SharedBase passing in the record and itself. public foo(IDataRecord record) {      Hydrate(record, this); } - Hydrate method on SharedBase will use reflection on the object passed in to determine its properties. At the same time, it will effectively cache these properties to avoid repeated expensive reflection calls public static void Hydrate(IDataRecord dataRecord, T instance) {      var instanceType = instance.GetType();      //Caching properties to avoid repeated calls to GetProperties.      //Noticable performance gains when processing same types repeatedly.      if (cachedProperties == null)      {           cachedProperties = instanceType.GetProperties().ToList();      } . . . - Hydrate method on SharedBase will iterate each property on the object and determine if a column with matching name exists in data record foreach (var property in cachedProperties) {      if (!dataRecord.ColumnExists(property.Name)) continue;      var ordinal = dataRecord.GetOrdinal(property.Name); . . . NOTE: ColumnExists is an extension method I put on IDataRecord which I’ll include at the end of this post. - Hydrate method will determine if the property is nullable and whether the value in the corresponding column of the data record has a null value var isNullable = property.PropertyType.IsGenericType && property.PropertyType.GetGenericTypeDefinition() == typeof (Nullable<>); var isNull = dataRecord.IsDBNull(ordinal); var propertyType = property.PropertyType; . . .  
- If Hydrate method determines the property is nullable it will determine the underlying type and set propertyType accordingly - Hydrate method will set the value of the property based upon the propertyType   That’s it!!!   The magic here is in a few places. First, you may have noticed the following: public abstract class SharedBase<T> where T : SharedBase<T> This says that SharedBase can be created with any type and that for each type it will have it’s own instance. This is important because of the static members within SharedBase. We want this behavior because we are caching the properties for each type. If we did not handle things in this way only 1 type could be cached at a time, or, we’d need to create a collection that allows us to cache the properties for each type = not very elegant.   Second, in the constructor for foo you may have noticed this (literally): public foo(IDataRecord record) {      Hydrate(record, this); } I wanted the code for auto-hydrating to be as simple as possible. At first I wasn’t quite sure how I could call Hydrate on SharedBase within an instance of the class and pass in the instance itself. Fortunately simply passing in “this” does the trick. I wasn’t sure it would work until I tried it out, and fortunately it did.   So, to actually use this feature when utilizing ADO.NET you’d do something like the following:        public List<foo> GetFoo(int? fooId)         {             List<foo> fooList;             const string uspName = "usp_GetFoo";             using (var conn = new SqlConnection(_dbConnection))             using (var cmd = new SqlCommand(uspName, conn))             {                 cmd.CommandType = CommandType.StoredProcedure;                 cmd.Parameters.Add(new SqlParameter("@FooID", SqlDbType.Int)                                        {Direction = ParameterDirection.Input, Value = fooId});                 conn.Open();                 using (var dr = cmd.ExecuteReader())                 {                     fooList= (from row in dr.Cast<DbDataRecord>()                                             select                                                 new foo(row)                                            ).ToList();                 }             }             return fooList;         }   Nice! Instead of having line after line manually assigning values from data record to an object you simply create a new instance and pass in the data record. Note that there are certainly instances where columns returned from stored procedure do not always match up with property names. In this scenario you can still use the above method and simply do your manual assignments afterward.
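
    The ColumnExists extension method referenced above is not included in this excerpt; a minimal sketch of what such an extension typically looks like (an assumption, not the author's original code):

        using System;
        using System.Data;

        public static class DataRecordExtensions
        {
            // Returns true when the record contains a column with the given name.
            public static bool ColumnExists(this IDataRecord dataRecord, string columnName)
            {
                for (var i = 0; i < dataRecord.FieldCount; i++)
                {
                    if (dataRecord.GetName(i).Equals(columnName, StringComparison.OrdinalIgnoreCase))
                    {
                        return true;
                    }
                }
                return false;
            }
        }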

    Read the article

  • How to Generate XBRL from Oracle's Applications

    - by Theresa Hickman
    I've been getting quite a few emails asking how Oracle supports XBRL. As of June 2011, all public US companies were required to produce XBRL-based financial statements to the SEC. The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10K and 10Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results. Click here to watch a 3 min demo about this cool tool. Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger? Answer: No problem, all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps: Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management. Publish your financial statements out of general ledger to Excel. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management.

    Read the article

  • Laggy Graphics After Upgrade to Ubuntu 13.10

    - by John bracciano
    I noticed some issues concerning the graphics after I upgraded from Ubuntu 13.04 to 13.10. More specifically: Dash tends to be more "laggy" than it used to be. When I switch desktops you can clearly see the screen going frame by frame. Menu bar is also "laggy". When I open GIMP, the menu bar is useless after a while - it takes seconds to refresh. The problems seem to appear after some seconds of use. When I boot the system everything works fine, until some more graphics-intensive program starts (Firefox, GIMP, etc). /usr/lib/nux/unity_support_test -p reports: OpenGL vendor string: Intel Open Source Technology Center OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile OpenGL version string: 3.0 Mesa 9.2.1 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: yes I use an Asus Zenbook UX31A with: Intel® Core i5 Ivy Bridge Integrated Intel® HD Graphics 4000 Intel® HM76 Express Chipset Thank you in advance for any answer!

    Read the article

  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model, which can be found here, as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp, and I extract all geometry information including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file: ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke), shininess, diffuse map, specular map, and normal map. I understand that I must render vertex attributes such as position, normals, and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but do you not require the diffuse, specular, and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents, and normals into g-buffers, then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
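
    To make the geometry-pass idea concrete, here is a hedged GLSL 3.30 fragment-shader sketch (uniform and output names are made up): the diffuse and specular maps are sampled and written straight into the g-buffer, and lighting is left entirely to the second pass:

        #version 330 core

        in vec3 worldPos;
        in vec3 worldNormal;      // or a TBN-transformed normal-map sample
        in vec2 texCoord;

        uniform sampler2D diffuseMap;
        uniform sampler2D specularMap;
        uniform float shininess;

        // One MRT attachment per g-buffer target
        layout (location = 0) out vec4 gPosition;
        layout (location = 1) out vec4 gNormal;
        layout (location = 2) out vec4 gAlbedoSpec;

        void main()
        {
            gPosition   = vec4(worldPos, 1.0);
            gNormal     = vec4(normalize(worldNormal), shininess);    // pack shininess in alpha
            gAlbedoSpec = vec4(texture(diffuseMap, texCoord).rgb,     // diffuse colour
                               texture(specularMap, texCoord).r);     // specular intensity in alpha
        }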

    Read the article

  • Change a Foreign Action's Display Text

    - by Geertjan
    I want the display text on an Action on a Node to show something about the underlying object. But the Action is registered somewhere in the layer (i.e., in the registry), i.e., I have no control over it. How do I change the display text in this scenario? Here's how. Below I look in the Actions/Events folder, iterate through all the Actions registered there, look for an Action with display text starting with "Edit", change it to display something from the underlying object, wrap a new Action around that Action, build up a new list of Actions, and return those (together with all the other Actions in that folder) from "getActions" on my Node: @Override public Action[] getActions(boolean context) { List<Action> newEventActions = new ArrayList<Action>(); List<? extends Action> eventActions = Utilities.actionsForPath("Actions/Events"); for (final Action action : eventActions) { String value = action.getValue(Action.NAME).toString(); if (value.startsWith("Edit")) { Action editAction = new AbstractAction("Edit " + getLookup().lookup(Event.class).getPlace()) { @Override public void actionPerformed(ActionEvent e) { action.actionPerformed(e); } }; newEventActions.add(editAction); } else { newEventActions.add(action); } } return newEventActions.toArray(new Action[eventActions.size()]); } If someone knows of a better way, please let me know.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then thought: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience – just build the mapping and the appropriate KM is used – and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus - and it is a big plus - it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • Radeon 9200 Driver?

    - by usuhh86
    I've been using Linux for a while, but haven't really done more than installed programs. I have Ubuntu 12.04 with an ATI Radeon 9200 graphics card. Ubuntu didn't recognize it during the install. All I want is a driver. Preferably the proprietary driver, but I'll settle for an open-source one if I need to. I went to the AMD support website, and whenever I click on the download link for "ATI Proprietary Linux x86 Display Driver 8.28.8" I get redirected to the main AMD website. I tried this on Firefox, Chrome, even booted up my netbook and tried it on IE9. Does anybody have a download link? And if not, a link to download an open-source alternative? Any help will be greatly appreciated. I'm pretty much still a newbie at this. OpenGL vendor string: Tungsten Graphics, Inc. OpenGL renderer string: Mesa DRI R200 (RV280 5961) x86/MMX/SSE2 TCL DRI2 OpenGL version string: 1.3 Mesa 8.0.2 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: no GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: no Unity 3D supported: no

    Read the article

  • Android Game Development problem with Speed = Distance / Time

    - by Charlton Santana
    I have been coding speed for an object. I have made it so the object will move from one end of the screen to another at a speed depending on the screen size, at the monemt I have made it so it will take one second to pass the screen. So i have worked out the speed in code but when I go to assign the speed it tells me to force close and i do not understand why. Here is the code: MainGame Code: @Override protected void onDraw(Canvas canvas) { setBlockSpeed(getWidth()); } private int blockSpeed; private void setBlockSpeed(int screenWidth){ Log.d(TAG, "screenWidth " + screenWidth); blockSpeed = screenWidth / 100; // 100 is the FPS.. i want it to take 1 second to pass the screen Math.round(blockSpeed); // to make it a whole number block.speed = blockSpeed; // this is line 318!! if i put eg block.speed = 8; it still tells me to force close } Block.java Code: public int speed; public void draw(Canvas canvas) { canvas.drawBitmap(bitmap, x - (bitmap.getWidth() / 2), y - (bitmap.getHeight() / 2), null); if(dontmove == 0){ this.x -= speed; // if it was eg this.x -= 18; it would not have an error } } The exception 06-08 13:22:34.315: E/AndroidRuntime(2801): FATAL EXCEPTION: Thread-11 06-08 13:22:34.315: E/AndroidRuntime(2801): java.lang.NullPointerException 06-08 13:22:34.315: E/AndroidRuntime(2801): at com.charltonsantana.game.MainGame.setBlockSpeed(MainGame.java:318) 06-08 13:22:34.315: E/AndroidRuntime(2801): at com.charltonsantana.game.MainGame.onDraw(MainGame.java:351) 06-08 13:22:34.315: E/AndroidRuntime(2801): at com.charltonsantana.game.MainThread.run(MainThread.java:64)
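
    The stack trace points at the block.speed assignment, so block is evidently null when onDraw first runs; a hedged sketch of a guard plus the rounding fix (Math.round returns a value rather than modifying its argument) could look like this:

        private void setBlockSpeed(int screenWidth) {
            if (block == null) {
                // onDraw can run before the block has been created; skip until it exists
                return;
            }
            // integer division can give 0 on narrow screens, so round from a float and clamp to 1
            int blockSpeed = Math.max(1, Math.round(screenWidth / 100f));
            block.speed = blockSpeed;
        }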

    Read the article

  • How do I implement movement in a WPF Adventure game?

    - by ZeroPhase
    I'm working on making a short WPF adventure game. The only major hurdle I have right now is how to animate objects on the screen correctly. I've experimented with DoubleAnimation and ThicknessAnimation both enable movement of the character, but the speed is a bit erratic. The objects I'm trying to move around are labels in a grid, I'm checking the mouse's position in terms of the canvas I have the grid in. Does anyone have any suggestions for coding the movement, while still allowing mouse clicks to pick up items when needed? It would be nice if I could continue using the Visual Studio GUI Editor. By the way, I'm fine with scrapping labels in a grid for a more ideal object to manipulate. Here's my movement code: ThicknessAnimation ta = new ThicknessAnimation(); The event handling movement: private void Hansel_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { ta.FillBehavior = FillBehavior.HoldEnd; ta.From = Hansel.Margin; double newX = Mouse.GetPosition(PlayArea).X; double newY = Mouse.GetPosition(PlayArea).Y; if (newX < Convert.ToDouble(Hansel.Margin.Left)) { //newX = -1 * newX; ta.To = new Thickness(0, newY, newX, 0); } else if (newY < Convert.ToDouble(Hansel.Margin.Top)) { newY = -1 * newY; } else { ta.To = new Thickness(newX, newY, 0, 0); } ta.Duration = new Duration(TimeSpan.FromSeconds(2)); Hansel.BeginAnimation(Grid.MarginProperty, ta); } ScreenShot with annotations: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps9d4a33cc.png ScreenShot with example movement: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps51f2359f.jpg
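
    One likely cause of the erratic speed is that the animation always lasts two seconds regardless of how far the label has to travel; a hedged sketch that derives the Duration from the distance (the 200 pixels-per-second figure is just an assumption) keeps the apparent speed constant:

        // Inside Hansel_MouseLeftButtonDown, before starting the animation:
        const double pixelsPerSecond = 200.0;   // assumed movement speed

        double currentX = Hansel.Margin.Left;
        double currentY = Hansel.Margin.Top;
        double newX = Mouse.GetPosition(PlayArea).X;
        double newY = Mouse.GetPosition(PlayArea).Y;

        // Straight-line distance from the current margin to the click point
        double distance = Math.Sqrt(Math.Pow(newX - currentX, 2) + Math.Pow(newY - currentY, 2));

        ta.From = Hansel.Margin;
        ta.To = new Thickness(newX, newY, 0, 0);
        ta.Duration = new Duration(TimeSpan.FromSeconds(distance / pixelsPerSecond));
        Hansel.BeginAnimation(Grid.MarginProperty, ta);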

    Read the article

  • "Siebel2FusionCRM Integration" solution by ec4u (D)

    - by Richard Lefebvre
    ec4u, a CRM System Integration leader based in Germany and Switzerland, and an historical Oracle/Siebel partner, offers a complete "Siebel2FusionCRM Integration" solution, based on tools methodology and services. ec4u Siebel2FusionCRM Integration solution's main objectives are: Integration between Siebel (on-premise) and Fusion CRM / Marketing (“in the cloud”) Accounts, Contacts and Addresses are maintained by Sales in Siebel CRM and synchronized in real-time into Fusion CRM / Marketing CDM Processing ensures clean data for marketing campaigns (validation and deduplication) Create E-Mail marketing campaigns and newsletters in Fusion The solution features: Upsert processes figure out what information needs to be updated, inserted or terminated (deleted). However, as Siebel is the data master, it is still a one-way synchronization. Handle deleted or nullified information by terminating them in Fusion CRM (set start and end date to define the validity period) Initial load and real-time synchronization use the same processes Invocations/Operations can be repeated due to no transactional support from Fusion web services Tagging sub entries in case of 1 to N mapping (Example: Telephone number is one simple field in Siebel but in Fusion you can have multiple telephone numbers in a sub table) E-Mail-Notification in case of any error (containing error message, instance number, detailed payload) Schematron Validation Interested? Looking for more details or a partnership with ec4u for a "Siebel2FusionCRM Integration" project? Contact: Gregor Bublitz, Director Expert Services ([email protected])

    Read the article
