Search Results

Search found 35149 results on 1406 pages for 'yield return'.

Page 597 of 1406

  • IValidatableObject vs Single Responsibility

    - by Boris Yankov
    I like the extensibility point of MVC that allows view models to implement IValidatableObject and add custom validation. I try to keep my controllers lean, having this code be the only validation logic: if (!ModelState.IsValid) return View(loginViewModel); For example, a login view model implements IValidatableObject and gets an ILoginValidator object via constructor injection: public interface ILoginValidator { bool UserExists(string email); bool IsLoginValid(string userName, string password); } It seems that injecting instances into view models with Ninject isn't really a common practice, maybe even an anti-pattern? Is this a good approach? Is there a better one?
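
    For context, a minimal sketch (in C#) of the pattern being described; the property names, messages, and the assumption that the DI container supplies the validator via a custom model binder or factory are illustrative, not taken from the question:

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    // ILoginValidator as defined in the question.
    public interface ILoginValidator
    {
        bool UserExists(string email);
        bool IsLoginValid(string userName, string password);
    }

    public class LoginViewModel : IValidatableObject
    {
        private readonly ILoginValidator _validator;

        // Assumes the container (e.g. Ninject) constructs the view model,
        // which is the very practice the question is asking about.
        public LoginViewModel(ILoginValidator validator)
        {
            _validator = validator;
        }

        public string Email { get; set; }
        public string Password { get; set; }

        // MVC calls this during model binding; afterwards the controller's
        // ModelState.IsValid check reflects these results.
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (!_validator.UserExists(Email))
                yield return new ValidationResult("Unknown user.", new[] { "Email" });
            else if (!_validator.IsLoginValid(Email, Password))
                yield return new ValidationResult("Invalid credentials.", new[] { "Password" });
        }
    }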

    Read the article

  • Cron job fails for any time other than default * * * * *

    - by Raghu
    On Ubuntu 11.10 (Oneiric Ocelot), my cron job runs fine if I use the default * * * * *, but if I want it to run at 17:00 or any other time, it never runs. My settings are: 00 17 * * * wget http://www.abc.com/a.php I also tried: 00 17 * * * root wget http://www.abc.com/a.php I also tried specifying the full path. There is a trailing newline (carriage return), and I'm logged in as root. Here is my complete crontab: TZ=Australia/Sydney 22 7 * * * /usr/bin/wget http://www.abc.com/a.php 22 7 * * * /bin/date >> /tmp/date.txt The output is as follows: root@Scrunch:~# sudo crontab -l -u root 55 12 * * * date >>/tmp/crontest.txt root@Scrunch:~# Why is the terminal displaying so many blank lines after outputting the crontab entries? Do you suspect stray carriage returns? I have not put entries in any other cron locations such as /etc/cron.d or /etc/cron.daily.

    Read the article

  • Finding the contact point with SAT

    - by Kai
    The Separating Axis Theorem (SAT) makes it simple to determine the Minimum Translation Vector, i.e., the shortest vector that can separate two colliding objects. However, what I need is the vector that separates the objects along the vector the penetrating object is moving along (i.e., back to the contact point). I drew a picture to help clarify. There is one box, moving from the before position to the after position. In its after position, it intersects the grey polygon. SAT can easily return the MTV, which is the red vector. I am looking to calculate the blue vector. My current solution performs a binary search between the before and after positions until the length of the blue vector is known to a certain threshold. It works, but it's a very expensive calculation since the collision between the shapes needs to be recalculated on every iteration. Is there a simpler and/or more efficient way to find the contact point vector?
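
    For reference, a minimal sketch of the binary-search approach the question describes, in C#. The shape type T and the SAT overlap test are assumed to be supplied by the caller; only the bisection itself is shown:

    using System;
    using System.Numerics;

    static class ContactFinder
    {
        // Bisects the path between a known-free position and a known-colliding
        // position until the two are within 'epsilon' of each other.
        public static Vector2 FindContactPosition<T>(
            T mover, T obstacle,
            Vector2 before,   // known non-colliding position
            Vector2 after,    // known colliding position
            Func<T, T, Vector2, bool> intersects, // SAT test at a given position
            float epsilon = 0.001f)
        {
            Vector2 free = before, hit = after;
            while ((hit - free).Length() > epsilon)
            {
                Vector2 mid = (free + hit) * 0.5f;
                if (intersects(mover, obstacle, mid))
                    hit = mid;  // still penetrating: contact lies earlier on the path
                else
                    free = mid; // separated: contact lies later on the path
            }
            return free; // approximate contact position; (free - after) is the blue vector
        }
    }

    Each iteration still runs a full SAT test, which is the cost the question complains about; a swept (time-of-impact) variant of SAT, which projects the relative motion onto each candidate axis and computes entry/exit times in a single pass, avoids the loop entirely.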

    Read the article

  • smartctl not returning on HBA that's secure-erasing a different drive

    - by Stu2000
    Whenever I run smartctl -i /dev/sd* where * is a drive that is plugged into the same host bus adapter as another drive that is currently being erased with an hdparm secure-erase command, the smartctl command will just 'hang' and not return (blocked) until the erasure of the other drive is finished. To make matters worse, you can't Ctrl-C out of it. Has anyone else had this issue? Is there another way to retrieve SMART data from a drive that doesn't block? I noticed that I can still use the udevadm command to retrieve the serial and model of the drive, which is useful but doesn't appear to include any SMART data. Any information relating to this matter is appreciated, especially if you can tell me another way to retrieve the S.M.A.R.T. data that might work. Regards, Stuart

    Read the article

  • What is the exception in this Java code? [closed]

    - by Karandeep Singh
    This Java code is supposed to reverse a string, but it returns "null" concatenated with the reversed string. import java.util.*; import java.util.logging.Level; import java.util.logging.Logger; public class Practice { public static void main(String[] args) { String str = ""; try { str = reverse("Singh"); } catch (Exception ex) { Logger.getLogger(Practice.class.getName()).log(Level.SEVERE, null, ex); System.out.print(ex.getMessage()); }finally{ System.out.println(str); } } public static String reverse(String str) throws Exception{ String temp = null; if(str.length()<=0){ throw new Exception("empty"); }else{ for(int i=str.length()-1;i>=0;i--){ temp+=str.charAt(i); } } return temp.trim(); } } Output: nullhgniS

    Read the article

  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at: Taking Thread Dumps - [an excellent blog post on the Middleware Magic site] or Different ways to take thread dumps in WebLogic Server (Document 1098691.1). There is another method - use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is that RDA will also collect configuration, log, network, system, and performance information at the same time. Instructions 1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile". 2. Choose a profile which includes the WebLogic Server data collection modules (for example, the profile "WebLogicServer"). At RDA setup time you should see the prompt below: ------------------------------------------------------------------------------- S301WLS: Collects Oracle WebLogic Server Information ------------------------------------------------------------------------------- Enter the location of the directory where the domains to analyze are located (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains) Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains) > For a successful WLS connection, ensure that the domain Admin Server is up and running. Data Collection Type:   1  Collect for a single server (offline mode)   2  Collect for a single server (using WLS connection)   3  Collect for multiple servers (using WLS connection) Enter the item number Hit 'Return' to accept the default (1) > 2 Choose option 2 or 3. Note: collecting for a single server or multiple servers using a WLS connection means that RDA will attempt to connect and execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it's likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps. 3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters, which govern the number of thread dumps and the thread dump interval: #N.Number of thread dumps to capture WREQ_THREAD_DUMP=10 #N.Thread dump interval WREQ_THREAD_DUMP_INTERVAL=5000 The example lines above show the default settings. In other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like: #N.Number of thread dumps to capture WREQ_THREAD_DUMP=10 #N.Thread dump interval WREQ_THREAD_DUMP_INTERVAL=30000 However, bear in mind that such a change will increase the total amount of time it takes for RDA to complete its run. 4. Once you are happy with setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory. 5. For ease of viewing, open up the RDA start HTML file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s).
    See the screenshots below. Screenshot 1: RDA Start Page - Main Index. Screenshot 2: Managed Server Sub Index. Screenshot 3: WLST Collections. Screenshot 4: Thread Dump Page - List of dump file links. Screenshot 5: Thread Dump Dat File Link. Additional comments: A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as: Thread Dump Analyzer, or Samurai - a GUI-based tail and thread dump analysis tool. If you are new to thread dump analysis, take a look at this recorded Support Advisor Webcast: Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps [slide deck from the webcast in PDF format]. B) I have logged a couple of enhancement requests for the RDA Development Team to consider: Add a timestamp to the dump file links, the dat filename, and the top of the body of the dat file. Package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if there was an implementation already available -- even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that there are only two returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
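
    In case a starting point helps, here is a sketch (C#) of one simple decomposition: collect maximal horizontal runs per row and merge vertically adjacent runs that share the same start column and width. It is not minimal, and for the plus-shaped example it yields three stacked rectangles rather than the two overlapping ones, but it is far better than emitting 1x1 rectangles:

    using System.Collections.Generic;

    static class MaskDecomposer
    {
        public static List<(int X, int Y, int W, int H)> ToRectangles(bool[,] mask)
        {
            int rows = mask.GetLength(0), cols = mask.GetLength(1);
            var open = new Dictionary<(int x, int w), (int y, int h)>(); // runs still growing downward
            var done = new List<(int, int, int, int)>();

            for (int y = 0; y <= rows; y++) // one extra pass to flush open runs
            {
                var next = new Dictionary<(int x, int w), (int y, int h)>();
                if (y < rows)
                {
                    for (int x = 0; x < cols; )
                    {
                        if (!mask[y, x]) { x++; continue; }
                        int start = x;
                        while (x < cols && mask[y, x]) x++;       // maximal run [start, x)
                        var key = (start, x - start);
                        next[key] = open.TryGetValue(key, out var run)
                            ? (run.y, run.h + 1)                  // continues the run above
                            : (y, 1);                             // starts a new rectangle
                    }
                }
                foreach (var kv in open)                          // runs that ended become output
                    if (!next.ContainsKey(kv.Key))
                        done.Add((kv.Key.x, kv.Value.y, kv.Key.w, kv.Value.h));
                open = next;
            }
            return done;
        }
    }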

    Read the article

  • What is the best currency API out there?

    - by YouBook
    I may be asking an odd question, but Webmasters seems like a decent place to post this. I'm searching for an accurate, easy-to-use API (one using JSON or XML) that I can use in my web application. I've been trying Google's secret API, but it isn't very dependable because of string parsing: it returns a weird JSON format that PHP sometimes parses into incorrect data, or the string gets truncated due to a parsing error. Google's API is fair but could be improved. So, all I'm asking is: what is the best currency API out there right now? I want it to come with documented API calls so I can use it with PHP. Cheers.

    Read the article

  • SQL SERVER – Finding Latch Statistics

    - by pinaldave
    Last month I wrote the SQL Server Wait Types and Queues series (SQL SERVER – Summary of Month – Wait Type – Day 28 of 28). I had great fun writing the series. I learned a lot, and I feel it has created some deep interest in the subject among others. I recently received a very interesting question from one of the readers of SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28: can they find out what kind of latches are waiting and what their count is? Absolutely! The SQL Server team has already provided a DMV which does just that. -- Latch SELECT * FROM sys.dm_os_latch_stats ORDER BY wait_time_ms DESC The script above will return details about how many latches were waiting and for how long. After going over this script I feel like going deeper into the subject. I will post a blog post on it soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • WIF, ADFS 2 and WCF – Part 6: Chaining multiple Token Services

    - by Your DisplayName here!
    See the previous posts first. So far we looked at the (simpler) scenario where a client acquires a token from an identity provider and uses that for authentication against a relying-party WCF service. Another common scenario is that the client first requests a token from an identity provider, and then uses this token to request a new token from a Resource STS or a partner’s federation gateway. This sounds complicated, but is actually very easy to achieve using WIF’s WS-Trust client support. The sequence is like this: Request a token from an identity provider. You use some “bootstrap” credential for that, like Windows integrated, UserName or a client certificate. The realm used for this request is the identifier of the Resource STS/federation gateway. Use the resulting token to request a new token from the Resource STS/federation gateway. The realm for this request would be the ultimate service you want to talk to. Use this resulting token to authenticate against the ultimate service. Step 1 is very much the same as the code I have shown in the last post. In the following snippet, I use a client certificate to get a token from my STS: private static SecurityToken GetIdPToken() {     var factory = new WSTrustChannelFactory(         new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential),         idpEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       factory.Credentials.ClientCertificate.SetCertificate(         StoreLocation.CurrentUser,         StoreName.My,         X509FindType.FindBySubjectDistinguishedName,         "CN=Client");       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(rstsRealm),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Using a token to request another token is slightly different. First, the IssuedTokenWSTrustBinding is used, and second, the channel factory extension methods are used to send the identity provider token to the Resource STS: private static SecurityToken GetRSTSToken(SecurityToken idpToken) {     var binding = new IssuedTokenWSTrustBinding();     binding.SecurityMode = SecurityMode.TransportWithMessageCredential;       var factory = new WSTrustChannelFactory(         binding,         rstsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;     factory.Credentials.SupportInteractive = false;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcRealm),         KeyType = KeyTypes.Symmetric     };       factory.ConfigureChannelFactory();     var channel = factory.CreateChannelWithIssuedToken(idpToken);     return channel.Issue(rst); } For this particular case I chose an ADFS endpoint for issued token authentication (see part 1 for more background). Calling the service now works exactly like I described in my last post. You may now wonder if the same thing can also be achieved using configuration only – absolutely. But there are some gotchas. First of all, the configuration file becomes quite complex. As we discussed in part 4, the bindings must be nested for WCF to unwind the token call-stack. But in this case svcutil cannot resolve the first hop, since it cannot use metadata to inspect the identity provider. This binding must be supplied manually. The other issue is around the value for the realm/appliesTo when requesting a token for the R-STS.
    Using the manual approach you have full control over that parameter, and you can simply use the R-STS issuer URI. Using the configuration approach, the exact address of the R-STS endpoint will be used. This means that you may have to register multiple R-STS endpoints in the identity provider. Another issue you will run into is that ADFS only accepts its configured issuer URI as a known realm by default. You’d have to manually add more audience URIs for the specific endpoints using the ADFS PowerShell commandlets. I prefer the “manual” approach. That’s it. Hope this is useful information.

    Read the article

  • How to write reusable code in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain some code which does something other than just reading data. Let's suppose I want to call this method multiple times, each time with a different success callback. Since the callback is included in the method itself, I can't find a clean way to do this without either duplicating the method or specifying all possible callbacks in a long switch statement. What is the proper way to handle this problem? Am I approaching it the wrong way?

    Read the article

  • Compute if a function is pure

    - by Oni
    As per Wikipedia: In computer programming, a function may be described as pure if both these statements about the function hold: The function always evaluates the same result value given the same argument value(s). The function result value cannot depend on any hidden information or state that may change as program execution proceeds or between different executions of the program, nor can it depend on any external input from I/O devices. Evaluation of the result does not cause any semantically observable side effect or output, such as mutation of mutable objects or output to I/O devices. I am wondering if it is possible to write a function that computes whether a function is pure or not. Example code in JavaScript: function sum(a,b) { return a+b; } function say(x){ console.log(x); } isPure(sum) // True isPure(say) // False

    Read the article

  • .NET 4: “Slim”-style performance boost!

    - by Vitus
    The RTM version of .NET 4 and Visual Studio 2010 is available, and now we can do some tests with it. Parallel Extensions is one of the most valuable parts of .NET 4.0. It’s a set of good tools for easily consuming multicore hardware power. And it also contains some “upgraded” sync primitives – Slim versions. For example, it includes an updated variant of the widely known ManualResetEvent. For people who don’t know about it: you can synchronize concurrent execution of pieces of code with this sync primitive. An instance of ManualResetEvent can be in 2 states: signaled and non-signaled. Transitions between them are possible via calls to the Set() and Reset() methods. A short explanation:

    Thread 1                      Thread 2            Time
    mre.Reset(); mre.WaitOne();   //code execution    0
    //waiting                     //code execution    1
    //waiting                     //code execution    2
    //waiting                     //code execution    3
    //waiting                     mre.Set();          4
    //code execution              //…                 5

    The upgraded version of this primitive is ManualResetEventSlim. The idea is to decrease the performance cost in the case when only 1 thread uses it. The main concept is the “hybrid sync schema”, which can be implemented as follows:   internal sealed class SimpleHybridLock : IDisposable { private Int32 m_waiters = 0; private AutoResetEvent m_waiterLock = new AutoResetEvent(false);   public void Enter() { if (Interlocked.Increment(ref m_waiters) == 1) return; m_waiterLock.WaitOne(); }   public void Leave() { if (Interlocked.Decrement(ref m_waiters) == 0) return; m_waiterLock.Set(); }   public void Dispose() { m_waiterLock.Dispose(); } } It’s a sample from Jeffrey Richter’s book “CLR via C#”, 3rd edition. The primitive SimpleHybridLock has two public methods: Enter() and Leave(). You can put your concurrency-critical code between calls to these methods, and it will be executed in only one thread at a time. The code is really simple: the first thread that calls Enter() increments the counter. A second thread also increments the counter, and suspends while m_waiterLock is not signaled. So, if we don’t have concurrent access to our lock, the “heavy” methods WaitOne() and Set() will not be called. This can give some performance bonus. ManualResetEventSlim uses a similar idea. Of course, it has more “smart” techniques inside, like checking for recursive calls, and so on. I want to know the real difference between the classic ManualResetEvent implementation and the new Slim one.
    I wrote a simple “benchmark”: class Program { static void Main(string[] args) { ManualResetEventSlim mres = new ManualResetEventSlim(false); ManualResetEventSlim mres2 = new ManualResetEventSlim(false);   ManualResetEvent mre = new ManualResetEvent(false);   long total = 0; int COUNT = 50;   for (int i = 0; i < COUNT; i++) { mres2.Reset(); Stopwatch sw = Stopwatch.StartNew();   ThreadPool.QueueUserWorkItem((obj) => { //Method(mres, true); Method2(mre, true); mres2.Set(); }); //Method(mres, false); Method2(mre, false);   mres2.Wait(); sw.Stop();   Console.WriteLine("Pass {0}: {1} ms", i, sw.ElapsedMilliseconds); total += sw.ElapsedMilliseconds; }   Console.WriteLine(); Console.WriteLine("==============================="); Console.WriteLine("Done in average=" + total / (double)COUNT); Console.ReadLine(); }   private static void Method(ManualResetEventSlim mre, bool value) { for (int i = 0; i < 9000000; i++) { if (value) { mre.Set(); } else { mre.Reset(); } } }   private static void Method2(ManualResetEvent mre, bool value) { for (int i = 0; i < 9000000; i++) { if (value) { mre.Set(); } else { mre.Reset(); } } } } I use 2 concurrent threads (the main thread and one from the thread pool) for setting and resetting the ManualResetEvents, run the test COUNT times, and calculate the average execution time. Here are the results, obtained on my dual-core notebook with a T7250 CPU and Windows 7 x64 [results screenshots: ManualResetEvent vs ManualResetEventSlim timings]. The difference is obvious and serious – a factor of 10! So, I think the preferable way is to use ManualResetEventSlim, because calls to Set() and Reset() will not always invoke the “heavy” methods for working with Windows kernel-mode objects. It’s a small and nice improvement! ;)

    Read the article

  • Design for a plugin based application

    - by Varun Naik
    I am working on an application whose details I cannot discuss here. We have a core framework, and the rest is designed as plug-ins. In the core framework we have a domain object. This domain object is updated by the plugins. I have defined an interface with a function: DomainObject doProcessing(DomainObject object) My intention here is that I pass the domain object in, the plugin updates it and returns it, and the updated object is then passed to a different plugin to be updated again, as sketched below. I am not sure if this is a good approach. I don't like passing the DomainObject to the plugins. Is there a better way I can achieve this? Should I just request data from each plugin and update the domain object myself?
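
    A sketch of the chaining being described, in C#; the doProcessing signature is taken from the question, while everything else (names, the pipeline class) is hypothetical:

    using System.Collections.Generic;

    public class DomainObject { /* core framework state */ }

    public interface IPlugin
    {
        // The contract from the question: take the domain object, update it, return it.
        DomainObject DoProcessing(DomainObject obj);
    }

    public sealed class PluginPipeline
    {
        private readonly IReadOnlyList<IPlugin> _plugins;

        public PluginPipeline(IReadOnlyList<IPlugin> plugins)
        {
            _plugins = plugins;
        }

        // Threads the domain object through each plugin in turn, so every
        // plugin sees the updates made by the plugins before it.
        public DomainObject Run(DomainObject obj)
        {
            foreach (var plugin in _plugins)
            {
                obj = plugin.DoProcessing(obj);
            }
            return obj;
        }
    }

    One common way to ease the coupling concern is to hand plugins a narrower read/write interface onto the domain object instead of the object itself, so each plugin can only touch the state it declares an interest in.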

    Read the article

  • Disconnect has no effect when using rdesktop

    - by Hongxu Chen
    So I'm using rdesktop on my laptop to remote into my PC in the lab, which is running Windows 7. Everything went well until I recently upgraded lubuntu on the laptop (or maybe it has nothing to do with the upgrade at all; I don't know). rdesktop fails to disconnect when I disconnect from the Start menu of Windows. This does not mean that I cannot return to my Linux desktop; I actually get back to lubuntu successfully and the terminal reports that I have disconnected. However, when I re-login to Windows on the lab PC (via rdesktop) after rebooting my laptop, it fails. When I then go to the PC in the lab, the on-screen message tells me that it is still connected to my lubuntu machine. So what's the problem? Have any of you had a similar experience? PC: Windows 7, in the lab; laptop: Linux (lubuntu 12.04).

    Read the article

  • Is there any reason lazy initialization couldn't be built into Java?

    - by Renesis
    Since I'm working on a server with absolutely no non-persisted state for users, every User-related object we have is rolled out on every request. Consequently I often find myself doing lazy initialization of properties of objects that may go unused. protected EventDispatcher dispatcher = new EventDispatcher(); Becomes... protected EventDispatcher<EventMessage> dispatcher; public EventDispatcher<EventMessage> getEventDispatcher() { if (dispatcher == null) { dispatcher = new EventDispatcher<EventMessage>(); } return dispatcher; } Is there any reason this couldn't be built into Java? protected lazy EventDispatcher dispatcher = new EventDispatcher();

    Read the article

  • SU, SUDO and Upgrade problem

    - by DigitalDaigor
    I have a big problem with my Ubuntu PC. When I try something that needs sudo or su, nothing works, and the OS upgrade doesn't either. When I enter the root password for the su command I get an "authentication error"; the same happens with the upgrades, and when I try the sudo command to install something I get no error message and nothing happens. I was trying to make Samba work when everything went wrong. Some help!? Thanks! Edit: SSH connection refused, and the connection is closed if I try to log in from a secondary PC. I can't access recovery mode... With autologin I can access the system, but if I log out and try to log in again it returns the authentication error.

    Read the article

  • Podcast: Oracle Introduces Oracle Communications Data Model

    - by kimberly.billings
    I recently sat down with Tony Velcich from Oracle's Communications Product Management team to learn more about the new Oracle Communications Data Model (OCDM). OCDM is a standards-based data model for Oracle Database that helps communications service providers jumpstart data warehouse and business intelligence initiatives to realize a fast return on investment by reducing implementation time and costs. Listen to the podcast.

    Read the article

  • cannot open ubuntu software center

    - by success
    I deleted some unnecessary icon themes, and now my application icons have changed. I cannot open Ubuntu Software Center either... the following message is displayed: success@user-pc:~$ software-center 2012-09-12 22:24:52,048 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-09-12 22:24:52,055 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True Traceback (most recent call last): File "/usr/bin/software-center", line 142, in <module> app = SoftwareCenterAppGtk3(datadir, xapian_base_path, options, args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 387, in __init__ self.datadir) File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/historypane.py", line 78, in __init__ self._get_emblems(self.icons) File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/historypane.py", line 192, in _get_emblems pb = icons.load_icon(emblem, self.ICON_SIZE, 0) File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function return info.invoke(*args, **kwargs) gi._glib.GError: Icon 'package-install' not present in theme I also tried the following command to change the icon, but it didn't work: gksu gedit /usr/share/applications/ubuntu-software-center.desktop

    Read the article

  • Patterns to refactor common code in multi-platform software

    - by L. De Leo
    I have a Django application and a PyQt application that share a lot of code. A big chunk of the PyQt application is copied verbatim from the Django application's views. As this is a game, I already have an engine.py module that I'm sharing between the two applications, but I was wondering how to restructure the middle layer (what in Django corresponds to the largest part of the views, minus the return HttpResponse part) into its own component. In the web application the components are those of a classic Django application (with the only exception that I don't make any use of models): the game engine, the URL dispatcher, the template, and the views. My PyQt application is divided into: the game engine; the UI definition, where I declare the UI components and react to the events (basically this takes the place of the template and the URL dispatcher in the Django app); and the controller, where I instantiate the window object and reproduce the methods that map to the views in the Django app.

    Read the article

  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
    One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model into a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or the ReadFromStreamAsync method, for writing or reading a model from a stream respectively. These two methods return a Task, which internally does all the serialization work, as illustrated below. public abstract class MediaTypeFormatter { public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext); public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger); } However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET Framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task using the method Task.Factory.StartNew to do all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, it might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code. If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they are actually using another pattern, which relies on a TaskCompletionSource class.
    public Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext) {   var tsc = new TaskCompletionSource<AsyncVoid>(); tsc.SetResult(default(AsyncVoid));   //Do all the serialization work here synchronously   return tsc.Task; }   /// <summary> /// Used as the T in a "conversion" of a Task into a Task{T} /// </summary> private struct AsyncVoid { } They are basically doing all the serialization work synchronously and using a TaskCompletionSource to return an already-completed task. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model. Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter for writing sync formatters: BufferedMediaTypeFormatter http://t.co/FxOfeI5x
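
    As a rough illustration of that built-in option, a write-only text formatter on top of BufferedMediaTypeFormatter might look like the sketch below. The formatter and its media type are invented for the example, and the override signatures are assumed to mirror the async ones shown above:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Formatting;
    using System.Net.Http.Headers;

    public class PlainTextFormatter : BufferedMediaTypeFormatter
    {
        public PlainTextFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
        }

        public override bool CanReadType(Type type)
        {
            return false; // reading is out of scope for this sketch
        }

        public override bool CanWriteType(Type type)
        {
            return type == typeof(string);
        }

        // The base class wraps this synchronous override in the Task-based
        // pipeline, much like the TaskCompletionSource pattern above.
        public override void WriteToStream(Type type, object value, Stream writeStream, HttpContent content)
        {
            using (var writer = new StreamWriter(writeStream))
            {
                writer.Write((string)value);
            }
        }
    }

    It would be registered like any other formatter, e.g. config.Formatters.Add(new PlainTextFormatter()) on the HttpConfiguration.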

    Read the article

  • Can't install fglrx on Kernel 3.11.0-12

    - by byf-ferdy
    I'm running Ubuntu GNOME 13.10 on a Sony Vaio laptop. I tried to install the newest fglrx driver following the cchtml.com guide, but no matter which version I use (13.4 or even 13.9), the installation of the generated .debs fails with this message: Error! Bad return status for module build on kernel: 3.11.0-12-generic (x86_64) This seems to be a confirmed bug on Launchpad. Currently I'm running the Radeon SI driver, but I really need some 3D acceleration. What can I do to install any version of fglrx correctly, or to speed up the bug-fixing process? How long does fixing bugs like these usually take?

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (e.g., the remote server could not be reached). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as follows: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type.
    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1              where i > threshold              select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values of it and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If that's not possible, a new type will be generated that approximates the type that exists on the server. Control metadata In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context. Runtime information So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.). Regards, The StreamInsight Team

    Read the article

  • Geometry Shader input vertex order

    - by NPS
    MSDN specifies (link) that when using the triangleadj type of input to the GS, it should provide me with 6 vertices in a specific order: the 1st vertex of the triangle being processed, a vertex of an adjacent triangle, the 2nd vertex of the triangle being processed, another vertex of an adjacent triangle, and so on... So if I wanted to create a pass-through shader (i.e., output the same triangle I got on input and nothing else) I should return vertices 0, 2 and 4. Is that correct? Well, apparently it isn't, because I did just that and when I ran my app the vertices were flickering (like changing positions/disappearing/showing again or something like that). But when I instead output vertices 0, 1 and 2, the app rendered the mesh correctly. I could provide some code, but it seems like the problem is in the input vertex order, not the code itself. So what order do input vertices to the GS come in?

    Read the article

  • gcc no longer works after upgrading to the latest Ubuntu

    - by Hugh S. Myers
    As an example: hsmyers@ubuntu:~/c_dev$ cat hello.c #include <stdio.h> int main(int argc,char **argv) { printf("Hello World!\n"); return 0; } hsmyers@ubuntu:~/c_dev$ gcc -c -o hello.o hello.c In file included from /usr/include/stdio.h:28:0, from hello.c:1: /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory compilation terminated. At a guess, somewhere along the way while trying to fix the error message "/usr/bin/ld: cannot find crt1.o: No such file or directory" I've munged things up completely. Could anyone please advise? --hsm

    Read the article
