Search Results

Search found 7294 results on 292 pages for 'parameters'.


  • HybridGraphics vga_switcheroo switch to intel at boot

    - by Jan
    How can I do that? I have followed https://help.ubuntu.com/community/HybridGraphics but it didn't work: I get an error at boot that /sys/kernel/debug/vgaswitcheroo/switch doesn't exist. The problem seems to be that /sys/kernel/debug doesn't exist yet when the GRUB parameters are processed; it is only created afterwards. So far I have put together an alternative method, which works: in rc.local I granted my user full access to /sys/kernel/debug and then to /sys/kernel/debug/vgaswitcheroo/switch, and then I made a simple script to switch the graphics to Intel and put it in ~/.config/autostart. The script is executed every time I log in to GNOME. It works, but it would be nice if it worked at boot, as described on that help.ubuntu.com page. Any ideas? Thanks

    Read the article

  • Scala factory pattern returns unusable abstract type

    - by GGGforce
    Please let me know how to make the following bit of code work as intended. The problem is that the Scala compiler doesn't understand that my factory is returning a concrete class, so my object can't be used later. Can TypeTags or type parameters help? Or do I need to refactor the code some other way? I'm (obviously) new to Scala.
    trait Animal
    trait DomesticatedAnimal extends Animal
    trait Pet extends DomesticatedAnimal { var name: String = _ }
    class Wolf extends Animal
    class Cow extends DomesticatedAnimal
    class Dog extends Pet
    object Animal {
      def apply(aType: String) = {
        aType match {
          case "wolf" => new Wolf
          case "cow" => new Cow
          case "dog" => new Dog
        }
      }
    }
    def name(a: Pet, name: String) {
      a.name = name
      println(a + "'s name is: " + a.name)
    }
    val d = Animal("dog")
    name(d, "fred")
    The last line of code fails because the compiler thinks d is an Animal, not a Dog.
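
    One way to make the last line compile, keeping the factory exactly as above, is to recover the Pet capability at the call site with a type test. This is a sketch of one option, not the only fix (an explicit cast or a more specific factory method would also work):
    // Sketch: reuse the definitions above unchanged; only the call site changes.
    Animal("dog") match {
      case p: Pet => name(p, "fred")               // p is statically a Pet here
      case other  => println(other + " cannot be named")
    }
    Alternatively, giving the factory a more precise return type (for example a hypothetical dog() method declared to return Dog) keeps the static type and avoids the runtime test altogether.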

    Read the article

  • The perils of double-dash comments [T-SQL]

    - by jamiet
    I was checking my Twitter feed on my way in to work this morning and was alerted to an interesting blog post by Valentino Vranken that highlights a problem regarding the OLE DB Source in SSIS. In short, using double-dash comments in SQL statements within the OLE DB Source can cause unexpected results. It really is quite an important read if you’re developing SSIS packages so head over to SSIS OLE DB Source, Parameters And Comments: A Dangerous Mix! and be educated. Note that the problem is solved in SSIS2012 and Valentino explains exactly why. If reading Valentino’s post has switched your brain into “learn mode” perhaps also check out my post SSIS: SELECT *... or select from a dropdown in an OLE DB Source component? which highlights another issue to be aware of when using the OLE DB Source. As I was reading Valentino’s post I was reminded of a slidedeck by Chris Adkin entitled T-SQL Coding Guidelines where he recommends never using double-dash comments: That’s good advice! @Jamiet

    Read the article

  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person / ego-perspective game I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside of that box but on screen sees only/mostly the inside of the box, or, vice versa, looks from within the box to the outside). The "casual" way of performing such a check would involve frustum culling, but that approach would be hard to achieve with my given set of engine parameters, so I'd like to avoid it if there is a simpler way. What I actually have at the point where I would like to do the check (high-level script on CPU, not GPU side): camera world position, camera direction, camera FOV, and the world coordinates of two box corners (left-bottom-front, right-top-back). What I do not have right away: a view frustum definition (near/far plane or, say, 6 planes defining the frustum) or any specific pixel information (uv, view-space position, depth or the like). What I would like to calculate: percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?
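
    One rough way to get an estimate with only the listed inputs, sketched below in Scala: project the eight box corners through a simple pinhole camera model, take their 2D bounding rectangle in normalized screen coordinates, clamp it to the screen and report the clamped area as the coverage fraction. The camera model, the fixed world up vector and all names here are assumptions made for the sketch, not part of any particular engine, and corners behind the camera are simply ignored, so treat this as an approximation rather than an exact coverage test.
    case class Vec3(x: Double, y: Double, z: Double) {
      def -(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
      def dot(o: Vec3) = x * o.x + y * o.y + z * o.z
      def cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
      def normalized = { val l = math.sqrt(this.dot(this)); Vec3(x / l, y / l, z / l) }
    }

    def coverageEstimate(camPos: Vec3, camDir: Vec3, fovY: Double, aspect: Double,
                         boxMin: Vec3, boxMax: Vec3): Double = {
      // Build a camera basis from the view direction and a fixed world up vector.
      val forward  = camDir.normalized
      val right    = forward.cross(Vec3(0, 1, 0)).normalized
      val up       = right.cross(forward)
      val tanHalfY = math.tan(fovY / 2)
      val tanHalfX = tanHalfY * aspect

      // The eight corners of the axis-aligned box.
      val corners = for {
        x <- Seq(boxMin.x, boxMax.x); y <- Seq(boxMin.y, boxMax.y); z <- Seq(boxMin.z, boxMax.z)
      } yield Vec3(x, y, z)

      // Project each corner to normalized screen space; drop corners behind the camera.
      val projected = corners.flatMap { c =>
        val v = c - camPos
        val zCam = v.dot(forward)
        if (zCam <= 0) None
        else Some((v.dot(right) / (zCam * tanHalfX), v.dot(up) / (zCam * tanHalfY)))
      }
      if (projected.isEmpty) return 0.0

      // Clamp the 2D bounding rectangle of the projected corners to [-1, 1] x [-1, 1].
      val minX = math.max(-1.0, projected.map(_._1).min)
      val maxX = math.min( 1.0, projected.map(_._1).max)
      val minY = math.max(-1.0, projected.map(_._2).min)
      val maxY = math.min( 1.0, projected.map(_._2).max)
      val w = math.max(0.0, maxX - minX)
      val h = math.max(0.0, maxY - minY)
      (w * h) / 4.0 // fraction of the [-1,1]^2 screen covered by the clamped rectangle
    }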

    Read the article

  • How should I make searching a relational database more efficient?

    - by Travis J
    This is in the scope of a web application. I have a database which has a few nested relations. There is a feature which depicts the history of a large chain of relations; it is essentially a data analysis feature. The issue is that in order to search, a large object graph must be loaded, and the loading time for this object graph is not quick enough to be viable. The problem is that without loading the whole graph, searching from a single string is nearly impossible: in order to search, explicit fields must be specified and the search data supplied. Is there a design pattern for exposing the data in a way which facilitates a single-string search instead of having to explicitly define parameters?

    Read the article

  • How do functional languages handle random numbers?

    - by Electric Coffee
    What I mean is that nearly every tutorial I've read about functional languages says that one of the great things about functions is that if you call a function with the same parameters twice, you'll always end up with the same result. How on earth do you then make a function that takes a seed as a parameter and returns a random number based on that seed? I mean, this would seem to go against one of the things that are so good about functions, right? Or am I completely missing something here?
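
    A minimal sketch of the usual functional answer, in Scala: the function stays pure because it returns both the value and the next seed, and the caller threads the seed through explicitly. The constants below are just those of a simple linear congruential generator, used purely for illustration.
    final case class Seed(value: Long) {
      def next: Seed = Seed(value * 6364136223846793005L + 1442695040888963407L)
    }

    // Same seed in, same pair out -- referential transparency is preserved.
    def nextInt(seed: Seed): (Int, Seed) = {
      val s = seed.next
      ((s.value >>> 16).toInt, s)
    }

    val s0 = Seed(42L)
    val (a, s1)      = nextInt(s0)
    val (b, s2)      = nextInt(s1) // a different number, because a different seed went in
    val (aAgain, _)  = nextInt(s0) // identical to `a`: the function is still pure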

    Read the article

  • Are there well-known PowerShell coding conventions?

    - by Tahir Hassan
    Are there any well-defined conventions when programming in PowerShell? For example, in scripts which are to be maintained long-term: Do we use the real cmdlet name or an alias? Do we specify the cmdlet parameter name in full or only partially (dir -Recurse versus dir -r)? When specifying string arguments for cmdlets, do we enclose them in quotes (New-Object 'System.Int32' versus New-Object System.Int32)? When writing functions and filters, do we specify the types of parameters? Do we write cmdlets in the (official) correct case? For keywords like BEGIN...PROCESS...END, do we write them in uppercase only? It seems that MSDN lacks a coding conventions document for PowerShell, while such a document exists, for example, for C#.

    Read the article

  • Problem in Addon Domain name binding with existing directory in my webspace

    - by articlestack
    I recently purchased the domain thinkzarahatke.com. At the time of purchase I selected an option, a URL redirect, to my existing site, and I entered the name server details. But I didn't find any other option to change the URL rewrite parameters under the Domain Name Manager. From the cPanel of my hosting space, I added the new domain as an addon domain and set a subdirectory for it. Now, if I try to access my new domain directly, it automatically redirects to my old website, while if I access some inner directory, like thinkzarahatke.com/blog, then I am able to reach my new site. What did I do wrong? Please tell me if I need to do some more settings with the addon domain.

    Read the article

  • AJAX Requests & Client-Side JavaScript

    - by Sarah24
    I am new to AJAX and trying to understand a question I've been given: An HTTP request is generated by a form which contains a drop-down list. When the form is submitted, a new web page is displayed with some relevant text information (which depends on the list item selected). Now, the same parameters are sent to the server via an AJAX request, and the same text information is returned. Q. What tasks would the client-side JavaScript have to do to ensure a valid request was constructed and sent? Any useful links or quick explanations greatly appreciated.

    Read the article

  • Removing existing filtered pages from Google's index: noindex / 301 / canonical to non-filtered page?

    - by Noam
    I've decided to remove some of my site's pages from the Google index to focus more of the indexed pages on higher-quality pages. The pages I'm going to remove are already in the index. These removed pages are filtered pages which will continue to exist; I just don't want them in the Google index, because they add little value over the same page without any filter selected. In Webmaster Tools I've specified "narrow" for the parameters that set these filters, but this doesn't seem to change how Google handles these pages. So I'm considering three options: (1) adding <meta name="robots" content="noindex" /> to the HTML header of these filtered pages; (2) a 301 to the non-filtered page that contains the most similar information and will remain in the index; (3) a canonical tag, which I'm not sure is exactly the mainstream use case, as these aren't really the same pages. Which should I use?

    Read the article

  • What's special about currying or partial application?

    - by Vigneshwaran
    I've been reading articles on functional programming every day and trying to apply the practices as much as possible, but I don't understand what is unique about currying or partial application. Take this Groovy code as an example: def mul = { a, b -> a * b } def tripler1 = mul.curry(3) def tripler2 = { mul(3, it) } I do not understand what the difference is between tripler1 and tripler2. Aren't they both the same? Currying is supported in purely or partially functional languages like Groovy, Scala, Haskell etc. But I can do the same thing (left-curry, right-curry, n-curry or partial application) by simply creating another named or anonymous function or closure that forwards the parameters to the original function (like tripler2) in most languages (even C). Am I missing something here? There are places where I could use currying and partial application in my Grails application, but I am hesitating to do so because I'm asking myself "How's that different?" Please enlighten me.
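
    For comparison, here is the same pair in Scala, a minimal sketch assuming a curried definition of mul: the curried partial application is produced mechanically and stays fully typed, while the hand-written closure does the same job with a little more ceremony, which for the two-argument case is essentially the whole difference.
    def mul(a: Int)(b: Int): Int = a * b

    val tripler1: Int => Int = mul(3)           // partial application via currying
    val tripler2: Int => Int = b => mul(3)(b)   // the hand-written closure from the question

    // Both behave the same here; the difference is convenience, not capability:
    println(tripler1(7)) // 21
    println(tripler2(7)) // 21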

    Read the article

  • build command by concatenating string in bash

    - by Lennart Rolland
    I have a bash script that builds a command line in a string based on some parameters before executing it in one go. The parts that are concatenated to the command string are supposed to be separated by pipes to facilitate a "streaming" of data through each component. A very simplified example:
    #!/bin/bash
    part1="gzip -c"
    part2="some_other_command"
    cmd="cat infile"
    if [ ! "$part1" = "" ]
    then
      cmd+=" | $part1"
    fi
    if [ ! "$part2" = "" ]
    then
      cmd+=" | $part2"
    fi
    cmd+="> outfile"
    # show command. It looks ok
    echo $cmd
    # run the command. fails with pipes
    $cmd
    For some reason the pipes don't seem to work. When I run this script I get different error messages, usually relating to the first part of the command (before the first pipe). So my question is whether or not it is possible to build a command in this way, and what is the best way to do it?

    Read the article

  • Rule Engine in .net

    - by user641812
    I have to import data from Excel to a SQL database. The Excel data contains various parameters and their values, like P1, P1, P4, P5, etc. I have to apply business rules such as: if (P1 > 100 and P1 < 200) then insert the record into the database. Similarly, in some cases string values are also validated. Is there any open source rule engine with a UI to change, add and delete the rules? I am using C# to read the Excel file and insert the records. One more thing: which is the better approach? Read the Excel file first and store every record as an object in a collection, then iterate through the collection, apply the business rules to every object and then insert the record into the database; or read one record from Excel, apply the business rules and then insert the record into the database, repeating the process for the whole file.
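
    Below is a minimal sketch of the "rules as data" idea this question is circling, written in Scala rather than the C# used above; the names (Row, Rule, shouldInsert) are illustrative and not from any particular rule-engine library. Because each rule is just a named predicate over a record, rules can be listed, added or removed at runtime (for example, loaded from a table that a UI edits) instead of being hard-coded. Streaming one record at a time also keeps memory flat, which is usually the safer of the two approaches listed above for large spreadsheets.
    case class Row(values: Map[String, Double])
    case class Rule(name: String, applies: Row => Boolean)

    // Rules as plain data: each one is a named predicate over a row of parameter values.
    val rules = List(
      Rule("P1 in range", row => row.values.get("P1").exists(v => v > 100 && v < 200)),
      Rule("P4 positive", row => row.values.get("P4").exists(_ > 0))
    )

    // A record is inserted only if every rule passes.
    def shouldInsert(row: Row): Boolean = rules.forall(_.applies(row))

    // Stream rows one at a time instead of materialising the whole spreadsheet.
    val rows = Iterator(Row(Map("P1" -> 150.0, "P4" -> 3.0)), Row(Map("P1" -> 90.0, "P4" -> 1.0)))
    rows.filter(shouldInsert).foreach(row => println(s"insert $row"))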

    Read the article

  • Simple script to get referenced table and their column names

    - by Peter Larsson
    -- Setup user supplied parameters
    DECLARE @WantedTable SYSNAME

    SET @WantedTable = 'Sales.factSalesDetail'

    -- Wanted table is "parent table"
    SELECT      PARSENAME(@WantedTable, 2) AS ParentSchemaName,
                PARSENAME(@WantedTable, 1) AS ParentTableName,
                cp.Name AS ParentColumnName,
                OBJECT_SCHEMA_NAME(parent_object_id) AS ChildSchemaName,
                OBJECT_NAME(parent_object_id) AS ChildTableName,
                cc.Name AS ChildColumnName
    FROM        sys.foreign_key_columns AS fkc
    INNER JOIN  sys.columns AS cc ON cc.column_id = fkc.parent_column_id
                    AND cc.object_id = fkc.parent_object_id
    INNER JOIN  sys.columns AS cp ON cp.column_id = fkc.referenced_column_id
                    AND cp.object_id = fkc.referenced_object_id
    WHERE       referenced_object_id = OBJECT_ID(@WantedTable)

    -- Wanted table is "child table"
    SELECT      OBJECT_SCHEMA_NAME(referenced_object_id) AS ParentSchemaName,
                OBJECT_NAME(referenced_object_id) AS ParentTableName,
                cc.Name AS ParentColumnName,
                PARSENAME(@WantedTable, 2) AS ChildSchemaName,
                PARSENAME(@WantedTable, 1) AS ChildTableName,
                cp.Name AS ChildColumnName
    FROM        sys.foreign_key_columns AS fkc
    INNER JOIN  sys.columns AS cp ON cp.column_id = fkc.parent_column_id
                    AND cp.object_id = fkc.parent_object_id
    INNER JOIN  sys.columns AS cc ON cc.column_id = fkc.referenced_column_id
                    AND cc.object_id = fkc.referenced_object_id
    WHERE       parent_object_id = OBJECT_ID(@WantedTable)

    Read the article

  • Historical origins of Scala implicits

    - by Frank
    Scala has been called complex because of its rich feature set by many of my colleagues, and some even blame all those new features. While most programmers are aware of the OO features, and at least the decent ones also know about functional programming, there is one feature in particular in Scala whose historical origins I am not aware of. Given that a major mantra of our profession is not to reinvent the wheel, I am rather confident that Scala does not have any actually unheard-of-before features, but I stand to be corrected on this if necessary. To get to the actual question: while I am aware of the origins of most of Scala's features, I have never seen something like its implicit declarations before. Are there other (older!) languages out there which also provide this feature? Does it make sense to distinguish the different cases of implicits (as they may originate from different sources), namely implicit conversions and implicit parameters?
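
    For reference, the two cases the question distinguishes, in a minimal Scala sketch (the standard library's Ordering is used for the implicit-parameter case):
    // 1. An implicit parameter: the compiler fills in the argument from the implicit scope.
    def smallest[T](a: T, b: T)(implicit ord: Ordering[T]): T =
      if (ord.lt(a, b)) a else b
    smallest(3, 5) // the compiler supplies Ordering[Int] for us

    // 2. An implicit conversion: the compiler inserts a call to make the types line up.
    import scala.language.implicitConversions
    implicit def intAsString(i: Int): String = i.toString
    val s: String = 42 // rewritten by the compiler to intAsString(42)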

    Read the article

  • Why we do not use PowerConnect to access PeopleSoft Tree

    - by dylan.wan
    1. It does not allow you to use parameters for the PeopleSoft connection. It may be changed later; however, it was a big issue when we tried to address customer issues. 2. It requires EFFDT as an option. It expects people to change the EFFDT using the Mapping Editor. How can a business user do that every month? 3. It asks for a Tree Name. Many PeopleSoft tree structures support multiple trees. A tree is just a header of the hierarchy. Whenever you add a new tree, you need to create a new mapping!! It does not make sense to use PowerConnect given these customer demands. All requirements are from customers. We have no choice but to stop using it.

    Read the article

  • BizTalk Server 2010 Beta available

    - by Rajesh Charagandla
    BizTalk Server 2010 Beta - Click Here to Download. Overview: BizTalk Server 2010 offers significant enhancements to help integrate heterogeneous line-of-business systems with Windows .NET and SharePoint based applications to optimize user productivity, gain business efficiency and increase agility. BizTalk Server 2010 allows .NET developers to take advantage of BizTalk services right out of the box to rapidly build solutions that need to integrate transactions and data from applications like SAP, mainframes, MS Dynamics and Oracle. Similarly, SharePoint developers can seamlessly use BizTalk services directly through the new Business Connectivity Services in SharePoint 2010. BizTalk Server 2010 includes a new data mapping and transformation tool to dramatically reduce the development time needed to mediate data exchange between disparate systems. It also provides a new single dashboard to manage performance parameters and streamline deployments from development to test to production. BizTalk 2010 includes a new, scalable Trading Partner Management (TPM) model with a graphical interface for flexible management of business partner relationships and an efficient on-boarding process.

    Read the article

  • documentation of typescript code

    - by Max Beikirch
    My question is rather short: how do I document TypeScript code properly? I found out that as projects become bigger and bigger, it is important to look at a function and immediately know its parameters, what it returns, its side-effects etc. It is tiring to have just a bunch of comments before a function; most of the time these 'blocks' don't even look consistent in style. What I am looking for is a documentation tool like javadoc or doxygen for TypeScript. Is there anything out there? Or is it possible to 'abuse' a documentation tool and get it to work with TypeScript?

    Read the article

  • Implement Tree/Details With Taskflow Regions Using EJB

    - by Deepak Siddappa
    This article describes on Display Tree/Details using taskflow regions.Use Case DescriptionLet us take scenario where we need to display Tree/Details, left region contains category hierarchy with items listed in a tree structure (ex:- Region-Countries-Locations-Departments in tree format) and right region contains the Employees list.In detail, Here User may drills down through categories using a tree until Employees are listed. Clicking the tree node name displays Employee list in the adjacent pane related to particular tree node. Implementation StepsThe script for creating the tables and inserting the data required for this application CreateSchema.sql Lets create a Java EE Web Application with Entities based on Regions, Countries, Locations, Departments and Employees table. Create a Stateless Session Bean and data control for the Stateless Session Bean. Add the below code to the session bean and expose the method in local/remote interface and generate a data control for that.Note:- Here in the below code "em" is a EntityManager. public List<Employees> empFilteredByTreeNode(String treeNodeType, String paramValue) { String queryString = null; try { if (treeNodeType == "null") { queryString = "select * from Employees emp ORDER BY emp.employee_id ASC"; } else if (Pattern.matches("[a-zA-Z]+[_]+[a-zA-Z]+[_]+[[0-9]+]+", treeNodeType)) { queryString = "select * from employees emp INNER JOIN departments dept\n" + "ON emp.department_id = dept.department_id JOIN locations loc\n" + "ON dept.location_id = loc.location_id JOIN countries cont\n" + "ON loc.country_id = cont.country_id JOIN regions reg\n" + "ON cont.region_id = reg.region_id and reg.region_name = '" + paramValue + "' ORDER BY emp.employee_id ASC"; } else if (treeNodeType.contains("regionsFindAll_bc_countriesList_1")) { queryString = "select * from employees emp INNER JOIN departments dept \n" + "ON emp.department_id = dept.department_id JOIN locations loc \n" + "ON dept.location_id = loc.location_id JOIN countries cont \n" + "ON loc.country_id = cont.country_id and cont.country_name = '" + paramValue + "' ORDER BY emp.employee_id ASC"; } else if (treeNodeType.contains("regionsFindAll_bc_locationsList_1")) { queryString = "select * from employees emp INNER JOIN departments dept ON emp.department_id = dept.department_id JOIN locations loc ON dept.location_id = loc.location_id and loc.city = '" + paramValue + "' ORDER BY emp.employee_id ASC"; } else if (treeNodeType.trim().contains("regionsFindAll_bc_departmentsList_1")) { queryString = "select * from Employees emp INNER JOIN Departments dept ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = '" + paramValue + "'"; } } catch (NullPointerException e) { System.out.println(e.getMessage()); } return em.createNativeQuery(queryString, Employees.class).getResultList(); } In the ViewController project, create two ADF taskflow with page Fragments and name them as FirstTaskflow and SecondTaskflow respectively. Open FirstTaskflow,from component palette drop view(Page Fragment) name it as TreeList.jsff. Open SeconfTaskflow, from component palette drop view(Page Fragment) name it as EmpList.jsff and create two paramters in its overview parameters tab as shown in below image. Open TreeList.jsff , from data control palette drop regionsFindAll->Tree as ADF Tree. 
In Edit Tree Binding dialog, for Tree Level Rules select the display attributes as follows:-model.Regions - regionNamemodel.Countries - countryNamemodel.Locations - citymodel.Departments - departmentName In structure panel, click on af:Tree - t1 and select selectionListener with edit property. Create a "TreeBean" managed bean with scope as "session" as shown in below Image. Create new method as getTreeNodeSelectedValue and click ok. Open TreeBean managed bean and add the below code: private String treeNodeType; private String paramValue; public void getTreeNodeSelectedValue(SelectionEvent selectionEvent) { RichTree tree = (RichTree)selectionEvent.getSource(); RowKeySet addedSet = selectionEvent.getAddedSet(); Iterator i = addedSet.iterator(); TreeModel model = (TreeModel)tree.getValue(); model.setRowKey(i.next()); JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding)tree.getRowData(); //oracle.jbo.Row Row rw = node.getRow(); Object selectedTreeNode = node.getAttribute(0); Object treeListType = node.getBindings(); String treeNodeType = treeListType.toString(); this.setParamValue(selectedTreeNode.toString()); this.setTreeNodeType(treeNodeType); } public void setTreeNodeType(String treeNodeType) { this.treeNodeType = treeNodeType; } public String getTreeNodeType() { return treeNodeType; } public void setParamValue(String paramValue) { this.paramValue = paramValue; } public String getParamValue() { return paramValue; }<br /> Open EmpList.jsff , from data control palette drop empFilteredByTreeNode->Employees->Table as ADF Read-only Table. After selecting the  Employees result set, in Edit Action Binding dialog window pass the pageFlowScope parameters as shown in below Image. In empList.jsff page, click Binding tab and click on Create Executable binding and select Invoke action and follow as shown in below image. Edit executeEmpFiltered invoke action properties and set the Refresh to ifNeeded, So when ever the page needs the method will be executed. Create Main.jspx page with page template as Oracle Three Column Layout. Drop FirstTaskflow as Region in start facet and drop SecondTaskflow as Region in center facet, Edit task Flow Binding dialog window pass the Input Paramters as shown in below Image. Run the Main.jspx, tree will be displayed in left region and emp details will displyaed on the right region. Click on the Americas in tree node, all emp related to the Americas related will be displayed. Click on Americas->United States of America->South San Francisco->Accounting, only employee belongs to the Accounting department will be displayed.

    Read the article

  • First languages with generic programming support

    - by oluies
    Which was the first language with generic programming support, and which was the first major, widely used, statically typed language with generics support? Generics implement the concept of parameterized types to allow for multiple types. The term generic means "pertaining to or appropriate to large groups of classes." I have seen the following mentions of "first": First-order parametric polymorphism is now a standard element of statically typed programming languages. Starting with System F [20,42] and functional programming languages, the constructs have found their way into mainstream languages such as Java and C#. In these languages, first-order parametric polymorphism is usually called generics. From "Generics of a Higher Kind", Adriaan Moors, Frank Piessens, and Martin Odersky. Generic programming is a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by Ada in 1983. From Wikipedia, Generic Programming.
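
    To illustrate the definition quoted from Wikipedia above, a minimal sketch in Scala (chosen for brevity, not one of the historical languages in question): a generic algorithm written against a to-be-specified-later type T, instantiated per call.
    // A generic algorithm: T is specified only at the call site.
    def firstOrElse[T](xs: List[T], fallback: T): T =
      xs.headOption.getOrElse(fallback)

    firstOrElse(List(1, 2, 3), 0)       // T instantiated as Int
    firstOrElse(List("a", "b"), "none") // T instantiated as String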

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter  needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as it’s predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all.  The following sections describe what did  change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter.  You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results.  Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean.  A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for.  The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names.  I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code. 
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo

    Read the article

  • SQL2K8R2: StreamInsight changes at RTM: Hopping Windows

    - by Greg Low
    We've been working on updating our demos and samples for the RTM changes of StreamInsight. I'll detail these as I come across them. The first is that there is a change to the HoppingWindow. The first two parameters are the same in the constructor but the third parameter is now required. It is the HoppingWindowOutputPolicy. Currently, there is only a single option for this which is ClipToWindowEnd. So you can create a HoppingWindow like this: var queryOutput = from w in input.HoppingWindow ( TimeSpan...(read more)

    Read the article

  • SQL ADO.NET shortcut extensions (old school!)

    - by Jeff
    As much as I love me some ORM's (I've used LINQ to SQL quite a bit, and for the MSDN/TechNet Profile and Forums we're using NHibernate more and more), there are times when it's appropriate, and in some ways more simple, to just throw up so old school ADO.NET connections, commands, readers and such. It still feels like a pain though to new up all the stuff, make sure it's closed, blah blah blah. It's pretty much the least favorite task of writing data access code. To minimize the pain, I have a set of extension methods that I like to use that drastically reduce the code you have to write. Here they are... public static void Using(this SqlConnection connection, Action<SqlConnection> action) {     connection.Open();     action(connection);     connection.Close(); } public static SqlCommand Command(this SqlConnection connection, string sql){    var command = new SqlCommand(sql, connection);    return command;}public static SqlCommand AddParameter(this SqlCommand command, string parameterName, object value){    command.Parameters.AddWithValue(parameterName, value);    return command;}public static object ExecuteAndReturnIdentity(this SqlCommand command){    if (command.Connection == null)        throw new Exception("SqlCommand has no connection.");    command.ExecuteNonQuery();    command.Parameters.Clear();    command.CommandText = "SELECT @@IDENTITY";    var result = command.ExecuteScalar();    return result;}public static SqlDataReader ReadOne(this SqlDataReader reader, Action<SqlDataReader> action){    if (reader.Read())        action(reader);    reader.Close();    return reader;}public static SqlDataReader ReadAll(this SqlDataReader reader, Action<SqlDataReader> action){    while (reader.Read())        action(reader);    reader.Close();    return reader;} It has been awhile since I've really revisited these, so you will likely find opportunity for further optimization. The bottom line here is that you can chain together a bunch of these methods to make a much more concise database call, in terms of the code on your screen, anyway. Here are some examples: public Dictionary<string, string> Get(){    var dictionary = new Dictionary<string, string>();    _sqlHelper.GetConnection().Using(connection =>        connection.Command("SELECT Setting, [Value] FROM Settings")            .ExecuteReader()            .ReadAll(r => dictionary.Add(r.GetString(0), r.GetString(1))));    return dictionary;} or... public void ChangeName(User user, string newName){    _sqlHelper.GetConnection().Using(connection =>         connection.Command("UPDATE Users SET Name = @Name WHERE UserID = @UserID")            .AddParameter("@Name", newName)            .AddParameter("@UserID", user.UserID)            .ExecuteNonQuery());} The _sqlHelper.GetConnection() is just some other code that gets a connection object for you. You might have an even cleaner way to take that step out entirely. This looks more fluent, and the real magic sauce for me is the reader bits where you can put any kind of arbitrary method in there to iterate over the results.

    Read the article

  • Animation file format

    - by Paul
    I'm trying to make a simple 2D animation file format. It'll be very rudimentary: only an XML file containing some parameters (such as frame duration) and metadata, and some images, each representing a frame. I'd like to have the whole animation (frames and XML document) packed in a single file. How do you suggest I do that? What libraries are there that would allow easy access to the files inside the animation file itself? The language I'm using is C++ and the platform is Windows, but I'd rather not use a platform-dependent library, if possible.

    Read the article

  • Long delays in Unity3D substance generation

    - by Josh Buhler
    We're currently working on an iOS/Android project in Unity3D, and we're seeing some incredibly long times for generating substances between testing runs. We can run the game, but once we shut down the playback, Unity begins to re-import all of the substances built using Substance Designer. As we've got a lot of these in our game, it's starting to lead to 5-minute delays between testing runs just to test a small change. Any suggestions or parameters we should check that could prevent Unity from needing to regenerate these substances every time? Shouldn't it be caching these things somewhere?

    Read the article
