Search Results

Search found 40744 results on 1630 pages for 'sql interview questions a'.

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background: The "all-PK-must-be-surrogates" approach is not present in Codd's relational model or in any SQL standard (ANSI, ISO, or other). The canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (the Professional Petroleum Data Management Association) recommends the same thing the canonical books do: use surrogate keys as primary keys when:
    - there are no natural or business keys;
    - natural or business keys are bad (they change often);
    - the value of the natural or business key is not known at the time of inserting the record;
    - multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose.
    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations made by some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of the maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changes, so we will have to use the RDBMS cascade-update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously long, variable-length varchars are not good for PKs, but indexes based on short, fixed-length varchars are almost as fast as integers. The questions:
    - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach?
    - Has Codd's relational model been superseded by a newer relational model?
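    To make the trade-off concrete, here is a minimal T-SQL sketch (the tables and columns are hypothetical, not from the question) contrasting a stable natural key with an all-surrogate design:

        -- Natural key: the ISO 4217 currency code is short, fixed-length, and very stable.
        CREATE TABLE dbo.Currency (
            CurrencyCode char(3)     NOT NULL PRIMARY KEY,  -- e.g. 'USD', 'EUR'
            CurrencyName varchar(50) NOT NULL
        );

        -- The referencing table is readable on its own: no join is needed to see the code.
        CREATE TABLE dbo.Price (
            ProductId    int           NOT NULL,
            CurrencyCode char(3)       NOT NULL REFERENCES dbo.Currency (CurrencyCode),
            Amount       decimal(19,4) NOT NULL
        );

        -- All-surrogate alternative: the code must be fetched through an extra join,
        -- and importing data between databases requires mapping surrogate values first.
        CREATE TABLE dbo.CurrencySurrogate (
            CurrencyId   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            CurrencyCode char(3)     NOT NULL UNIQUE,
            CurrencyName varchar(50) NOT NULL
        );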

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-28

    - by Bob Rhubart
    Follow the action: OTN's YouTube Channel
    Check out what's happening at Oracle OpenWorld and JavaOne with video coverage by the OTN crew. New interviews and more posted daily on the OTN YouTube channel.
    Whiteboards, not red carpets: OTN Architect Day Los Angeles. Oct 25. Free event.
    Yes, it's Tinsel Town, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems–and lunch is on us. Register now. Thursday, October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048
    Overview of the 5th SOA, Cloud and Service Technology Symposium | Jan van Zoggel
    Middleware consultant and author Jan van Zoggel shares an overview of three of the sessions he attended at this week's SOA, Cloud, and Service Technology Symposium in the UK.
    OOW 2012: Questions to get answered during this conference | Lucas Jellema
    Oracle ACE Director Lucas Jellema shares "a quick list of some of the questions that are on the top of my head to get answered during this year's conference." The list may be quick, but it is quite detailed, and well worth a look.
    Front-ending a SAML Service Provider with OHS | Andre Correa
    Oracle Fusion Middleware A-Team member Andre Correa shares a follow-up to a previous post covering Integrating OBIEE 11g into WebLogic's SAML SSO.
    Thought for the Day
    "Simplicity is prerequisite for reliability." — Edsger W. Dijkstra (May 11, 1930 – August 6, 2002)
    Source: SoftwareQuotes.com

    Read the article

  • PASS Summit Survey

    - by andyleonard
    One item mentioned in the PASS Board Q & A was the PASS Summit survey, which should be in your Inbox today if you attended the PASS Summit 2013. Charlotte? Did you enjoy having the Summit in Charlotte? If so, the PASS Summit Survey is your primary and most effective means of communicating this fact to the PASS Board and PASS HQ. The same holds if you didn’t like the Summit in Charlotte. Would you rather have the Summit remain forever in Seattle? Would you like to see the Summit in Seattle two...(read more)

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the “nearest day”, but instead of simply removing the time (truncating the decimal part of the number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
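    If you can shape the source query, one way to sidestep the behavior described above is to strip the time portion before the column ever reaches PowerPivot, so the relationship key is a pure date. A sketch in T-SQL (table and column names are hypothetical):

        -- SQL Server 2008+: casting datetime to date truncates the time portion.
        SELECT SalesId,
               CAST(OrderDateTime AS date) AS OrderDate,
               Amount
        FROM dbo.Sales;

        -- Pre-2008 alternative: round down to midnight while keeping the datetime type.
        SELECT DATEADD(DAY, DATEDIFF(DAY, 0, OrderDateTime), 0) AS OrderDate
        FROM dbo.Sales;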

    Read the article

  • Monitoring Baseline

    - by Grant Fritchey
    Knowing what's happening on your servers is important; that's monitoring. Knowing what happened on your server is establishing a baseline. You need to do both. I really enjoyed this blog post by Ted Krueger (blog|twitter). It's not enough to know what happened in the last hour or yesterday; you need to compare today to last week, especially if you released software this weekend. You need to compare today to 30 days ago in order to begin to establish future projections. How your data has changed over 30 days is a great indicator of how it's going to change over the next 30. No, it's not perfect, but predicting the future is not exactly a science; just ask your local weatherman. Red Gate's SQL Monitor can show you the last week, the last 30 days, the last year, or all the data you've collected (if you choose to keep a year's worth of data or more, please have PLENTY of storage standing by). You have a lot of choice and control here over how much data you store. Here's the configuration window showing how you can set this up: This is for version 2.3 of SQL Monitor, so if you're running an older version, you might want to update. The key point is that a baseline simply represents a moment in time on your server. The ability to compare now to then is what you're looking for in order to really have a useful baseline, as Ted lays out so well in his post.

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because it was often grayed out. Now that selection is gone, but no matter. It wouldn’t help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter to use the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution. Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let’s say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

        Expression      Data Type   Operator   Value
        [Category]      Text        =          Bikes
        [SalesAmount]               >          1000

    then AND logic is used, which means that both conditions must be true. That’s not the result I want. Instead, I need to set up the filter like this:

        Expression                                           Data Type   Operator   Value
        =Fields!EnglishProductCategoryName.Value = "Bikes"
         OR Fields!SalesAmount.Value > 1000                  Boolean     =          =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if its total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can’t see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)
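    For comparison, a sketch of the same OR predicate pushed into the dataset query instead of a report filter (T-SQL against AdventureWorksDW2008R2, ignoring the per-year grouping for brevity):

        SELECT p.EnglishProductName,
               c.EnglishProductCategoryName,
               SUM(f.SalesAmount) AS SalesAmount
        FROM dbo.FactInternetSales AS f
        JOIN dbo.DimProduct AS p
            ON p.ProductKey = f.ProductKey
        JOIN dbo.DimProductSubcategory AS s
            ON s.ProductSubcategoryKey = p.ProductSubcategoryKey
        JOIN dbo.DimProductCategory AS c
            ON c.ProductCategoryKey = s.ProductCategoryKey
        GROUP BY p.EnglishProductName, c.EnglishProductCategoryName
        HAVING c.EnglishProductCategoryName = 'Bikes'
            OR SUM(f.SalesAmount) > 1000;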

    Read the article

  • PASS Summit 2012 Women In Technology Luncheon

    - by AllenMWhite
    My final stint at the Summit Blogger's Table(tm) is for the annual WIT luncheon. I do appreciate the honor that PASS conferred on me by inviting me to the "table" for the event, it's been a lot of fun (even if there were some moments that weren't.) Newly-elected board member Wendy Pastrick is the MC for this year's luncheon, and the panel consists of Stefanie Higgins, Denise McInerny, Kevin Kline, Jen Stirrup and Kendra Little. I'm pleased to say that I know each one of them except Stefanie Higgins,...(read more)

    Read the article

  • In SQLCMD mode, should CONNECT be an implicit batch separator?

    - by Greg Low
    Hi Folks, I've been working with SQLCMD mode again today and one thing about it always bites me. If I execute a script like:

        ::CONNECT SERVER1
        SELECT @@VERSION;
        ::CONNECT SERVER2
        SELECT @@VERSION;
        ::CONNECT SERVER3
        SELECT @@VERSION;

    I'm sure I'm not the only person that would be surprised to see all three SELECT commands executed against SERVER3 and none executed against SERVER1 or SERVER2. If you think that's odd behavior, here's where to vote: https://connect.microsoft.com/SQLServer/feedback/details/611144/sqlcmd-connect-to-a-different-server-should-be-an-implicit-batch-separator#detail...(read more)
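    The behavior follows from batch scoping: a :CONNECT only takes effect when the batch containing it is submitted, so three :CONNECT directives in one batch leave only the last one standing, which is exactly why the Connect item asks for CONNECT to be an implicit batch separator. A sketch of the usual workaround, making each statement its own batch with GO:

        ::CONNECT SERVER1
        SELECT @@VERSION;
        GO
        ::CONNECT SERVER2
        SELECT @@VERSION;
        GO
        ::CONNECT SERVER3
        SELECT @@VERSION;
        GO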

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, etc., and now I'm ready to ask some questions (if you don't mind answering them) because I'm really confused. First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, and to create some logic by linking published events to published actions (e.g. a level "activate" event triggers a door "open" action), etc. Because of all this, I guess my entity system should be written in a scripting language, in my case Lua. On the other hand, I want to use a component-based design for my entities, and here my questions start: 1) Should I define my components in C++? If I define them in C++, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libs to Lua? 2) Where should I place the message system for my game engine? Lua? C++? I'm tempted to just have C++ objects behave as servers, offering services to the Lua business logic: things like the physics system, rendering system, input system, World class, etc. And everything else in Lua: creation/composition of entities based on components, game logic, etc. Could anyone give any insight on how to accomplish this? And which approach is better? Thanks in advance, HexDump.

    Read the article

  • Which forum software has the most advanced community/GetSatisfaction type features?

    - by Gaia
    I need to assemble a GetSatisfaction/Lithium/Jive type support forum/community. The first is not available in the desired language, and the last two are priced for the enterprise market. I did research some other options (open source or SaaS), but they all seem to be either:
    - kind of dead (the open source options)
    - too focused on gathering ideas/feedback (UserVoice)
    - strictly support, without the community/voting features (Zendesk)
    I need an open forum (people-powered support/UGC with community/voting features). Therefore I will have to do some of the work on my own. I want to piece things (plugins/mods/etc.) on top of a standard forum platform to give it the features I need. For this purpose, I want to use a mature product with a widespread userbase, an active community, and lots of plugin options. I believe most will agree that my options therefore are:
    - vBulletin
    - phpBB
    - SMF
    Here are the questions:
    1. Which one of the three above offers the easier path towards the desired goal?
    2. Which one of the three above has the most advanced features related to the desired goal?
    Of course I don't expect anyone to know these answers cut and dried. I am hoping to hear some experiences and see some examples. Also, it would be great if both those questions had the same answer, but I am not going to get my hopes up...
    PS: I wish I could add the tags "phpbb" and "smf" ;)

    Read the article

  • Dependencies not met on 12.04?

    - by Mochan
    Now I'm very aware that there are many questions out there that are quite similar to what I'm experiencing, but I have looked through many and I have not found a suitable answer. You are welcome to suggest questions that are similar, but I doubt that it will help. Getting on to the issue at hand: whenever I do anything that involves installation, whether it be codecs for videos, new programs, or whatever else, I always get the 'Dependencies not met' error. In addition, I also get a notification in the panel. When clicked, the menu says this: "An error occurred. Please run Package Manager from the right-click menu or run apt-get in a terminal to see what is wrong. The error message was: 'Error: Broken Count 0'. This usually means your installed packages have unmet dependencies." It gives me three items to click:
    - Show Updates
    - Install all updates
    - Check for Updates
    And then finally: Show Notifications (with a tick) and Preferences. When I try 'Install all Updates' (and also 'Check for Updates' then install), I get 'Ubuntu has experienced an internal error' and 'Did this error occur when moving from one version of Ubuntu to another?' (I clicked NO, because it didn't). So I took its advice and ran sudo apt-get install -f. This is what results:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libapt-pkg4.12:i386
        The following packages will be upgraded:
          libapt-pkg4.12:i386
        1 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/941 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        E: Internal Error, No file name for libapt-pkg4.12

    When running sudo apt-get update it's all fine, but running sudo apt-get install -f still results in the same thing. I really have no idea what to do... can anyone help me?

    Read the article

  • Cell Transitions in Excel 2013 Preview–Fixed

    - by simonsabin
    If you’ve downloaded Excel 2013 and been working with it, you may have noticed the new cell transition feature. Not sure why they put it in; it feels a bit like the Aero interface, which I understand has been dropped in Windows 8. What you may have found is that the transition is buggy: Excel hangs, or the transition is jumpy. Well, I found the fix on http://answers.microsoft.com/en-us/office/forum/office_home-excel/hardware-acceleration-problem-with-excel-2013/894da202-48c0-4442-a371-955587c1b7c0 For...(read more)

    Read the article

  • Microsoft BI Conference 2011 in Lisbon

    - by AlbertoFerrari
    Anyone interested in BI from Portugal or Spain should not miss the Microsoft BI Conference 2011 in Lisbon: one full day (March 25, 2011) with three tracks on Business Intelligence: Decision Makers, BI Pros, and Intro to BI. I am going to present two sessions on PowerPivot: one is a nice deep dive into DAX for BI pros, the other is about self-service BI for decision makers. Titles and the complete agenda will be published in the coming days, but I suggest you save the date. The full event is free and it...(read more)

    Read the article

  • AS3 How to check on non transparent pixels in a bitmapdata?

    - by Opoe
    I'm still working on my window cleaning game from one of my previous questions. I marked a contribution as my answer, but after all this time I can't get it to work, and I have too many questions about this, so I decided to ask some more. As a sequel to my previous question, my question to you is: how can I check whether or not a BitmapData contains non-transparent pixels? Subquestion: is this possible when the masked image is a movieclip, or should I use graphics instead? Information I have: a dirty-window movieclip on the bottom layer, and a clean-window movieclip (mc1) on the layer above. To hide the top layer (the dirty window) I assign a mask to it. Code:

        // this creates a mask that hides the movieclip on top
        var mask_mc:MovieClip = new MovieClip();
        addChild(mask_mc);
        // assign the mask to the movieclip it should 'cover'
        mc1.mask = mask_mc;

    With a brush (cursor) the player wipes off the dirt (actually filling the mask so the clean window appears):

        // add event listeners for the 'brush'
        brush_mc.addEventListener(MouseEvent.MOUSE_DOWN, brushDown);
        brush_mc.addEventListener(MouseEvent.MOUSE_UP, brushUp);

        // function to drag the brush over the mask
        function brushDown(dragging:MouseEvent):void {
            dragging.currentTarget.startDrag();
            MovieClip(dragging.currentTarget).addEventListener(Event.ENTER_FRAME, erase);
            mask_mc.graphics.moveTo(brush_mc.x, brush_mc.y);
        }

        // function to stop dragging the brush over the mask
        function brushUp(dragging:MouseEvent):void {
            dragging.currentTarget.stopDrag();
            MovieClip(dragging.currentTarget).removeEventListener(Event.ENTER_FRAME, erase);
        }

        // fill the mask under the brush so the movieclip turns visible there
        function erase(e:Event):void {
            with (mask_mc.graphics) {
                beginFill(0x000000);
                drawRect(brush_mc.x, brush_mc.y, brush_mc.width, brush_mc.height);
                endFill();
            }
        }

    Read the article

  • PASS: Budget Status

    - by Bill Graziano
    Our budget situation is a little different this year than in years past.  We were late getting an initial budget approved.  There are a number of different reasons this occurred.  We had different competing priorities and the budget got pushed down the list.  And that’s completely my fault for not making the budget a higher priority and getting it completed on time. That left us with initial budget approval in early August rather than prior to June 30th.  Even after that there were a number of small adjustments that needed to be made.  And one large glaring mistake that needed to be fixed.  We had a typo in the budget that made it through twelve versions of review.  In my defense I can only say that the cell was red so of course it had to be negative!  And that’s one more mistake I can add to my long and growing list of Mistakes I’ll Never Make Again. Last week we passed a revised budget (version 17) with this corrected.  This is the version we’re cleaning up and posting to the web site this week or next.

    Read the article

  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    Hi all, I want to define a DataSource to an Oracle database on my Tomcat 6.0. So, in conf/server.xml (yes, I know that this DataSource will be available for all the webapps in Tomcat, but that's not a problem here), I've set this Resource:

        <GlobalNamingResources>
            <Resource name="hibernate/HibernateDS"
                      auth="Container"
                      type="javax.sql.DataSource"
                      url="jdbc:oracle:thin:@myserver:1542:foo"
                      username="foo"
                      password="bar"
                      driverClassName="oracle.jdbc.OracleDriver"
                      maxActive="50"
                      maxIdle="10"
                      validationQuery="select 1 from dual"/>

    Then, in the web.xml of my application, I set a resource-ref element:

        <resource-ref>
            <description>Hibernate Datasource</description>
            <res-ref-name>hibernate/HibernateDS</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
        </resource-ref>

    Finally, as Hibernate is used to manage the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session-factory using the JNDI DataSource:

        <hibernate-configuration>
            <session-factory>
                <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property>
                ...

    However, when I start my Tomcat server, I get an error saying the JDBC driver could not be created:

        INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{}
        INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS
        INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
        INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
        WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata
        org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
            at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
            at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
            at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172)
            ...
        Caused by: java.lang.NullPointerException
            at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507)
            at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476)
            at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307)
            at java.sql.DriverManager.getDriver(DriverManager.java:253)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143)
            ... 11 more

    Do you have any idea why Hibernate is not able to construct the session-factory? What is wrong in my configuration?
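    A likely cause worth checking (not confirmed in the question itself): a Resource declared in GlobalNamingResources is not visible to web applications until it is exposed through a ResourceLink in the context, and "Cannot create JDBC driver of class '' for connect URL 'null'" is the classic symptom of the JNDI lookup resolving to an unconfigured DataSource. A sketch of the missing piece, placed in conf/context.xml or the webapp's META-INF/context.xml:

        <Context>
            <!-- Expose the global resource to this webapp under the JNDI name it expects -->
            <ResourceLink name="hibernate/HibernateDS"
                          global="hibernate/HibernateDS"
                          type="javax.sql.DataSource"/>
        </Context>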

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it’s part of the book we’re writing), so that I have a complete picture of the best function to use, depending on the rounding operation I need to do. Here is the list of functions used, and then the results shown for a relevant set of values:

        FLOOR     = FLOOR( Tests[Value], 0.01 )
        TRUNC     = TRUNC( Tests[Value], 2 )
        ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 )
        MROUND    = MROUND( Tests[Value], 0.01 )
        ROUND     = ROUND( Tests[Value], 2 )

    ...(read more)

    Read the article

  • Perspective in Modeling

    - by drsql
    Your task: model a database that represents a suburban block. You survey the area and see a number of houses (pictures culled from Wikipedia). So you look at the houses and start modeling roofs, windows, lawns, driveways, mailboxes, porches, etc. You get done, and with your 30+ tables you are feeling great, right? I know I would be. “I knocked this out of the park! We can capture everything about these houses. I…am…a…superhero database modeler,” I think, “I will get a big...(read more)

    Read the article

  • Analysing Indexes - count *

    - by GrumpyOldDBA
    In my presentations on indexing I have always said that you should explore the advantages of covering your clustered index with a secondary index. In circumstances where you might want to just return values from the PK (assuming it's your clustered index), a secondary index will be more efficient, especially when the row size is wide. Any operation on a clustered index will always return the entire row, so select ID from dbo.mytable (where ID is the clustered PK integer) will return not just the...(read more)
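    A minimal T-SQL sketch of the idea (the table is hypothetical; the reasoning is the excerpt's own): with wide rows, a narrow nonclustered index over the same key lets SQL Server answer the query from far fewer pages:

        -- Wide rows: the clustered index (the table itself) carries every column.
        CREATE TABLE dbo.mytable (
            ID      int        NOT NULL,
            Payload char(2000) NOT NULL,
            CONSTRAINT PK_mytable PRIMARY KEY CLUSTERED (ID)
        );

        -- Narrow "covering" secondary index: only ID, so many more keys fit per page.
        CREATE INDEX IX_mytable_ID ON dbo.mytable (ID);

        -- This query can be satisfied entirely from the narrow index
        -- instead of scanning the wide clustered rows.
        SELECT ID FROM dbo.mytable;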

    Read the article

  • SSIS - Range lookups

    - by Repieter
    When developing an ETL solution in SSIS we sometimes need to do range lookups. Several solutions for this can be found on the internet, but now we have built another solution which I would like to share, since it's pretty easy to implement and the performance is fast. You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed. (Apologies for the layout of this blog, I don't do this too often :)) To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation on the Product dimension. This is done with a script component. First we query the data warehouse to create the lookup dataset. The query that is used for that is:

        SELECT
            [ProductKey]
            ,[ProductAlternateKey]
            ,[StartDate]
            ,ISNULL([EndDate], '9999-01-01') AS EndDate
        FROM [DimProduct]

    The output of this query is stored in a DataTable:

        string lookupQuery = @"
            SELECT
                [ProductKey]
                ,[ProductAlternateKey]
                ,[StartDate]
                ,ISNULL([EndDate], '9999-01-01') AS EndDate
            FROM [DimProduct]";

        OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);
        OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);

        _dataTable = new DataTable();
        adapter.Fill(_dataTable);

    Now that the dimension data is stored in the DataTable, we use the following method to do the actual lookup:

        public int RangeLookup(string businessKey, DateTime lookupDate)
        {
            // set default return value (Unknown)
            int result = -1;

            DataRow[] filteredRows;
            filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));

            for (int i = 0; i < filteredRows.Length; i++)
            {
                // check if the lookup date is found between the start date and end date of any of the records
                if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
                {
                    result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0];
                    break;
                }
            }

            filteredRows = null;

            return result;
        }

    This method is executed for every row that passes the script component. This is implemented in the ProcessInputRow method:

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Perform the lookup operation on the current row and put the value in the Surrogate Key attribute
            Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);
        }

    Now what actually happens?!
    1. Every record passes the business key and the order date to the RangeLookup method.
    2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow[] object.
    3. We loop over the DataRow[] object to see where the order date meets the following expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
    4. When the expression returns true (so where the date is between the StartDate and the EndDate), the surrogate key of the dimension record is returned.

    We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.
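    For comparison only (not part of the post's SSIS approach), the same SCD type 2 range lookup can be written set-based in T-SQL against the same dimension; the staging table name here is hypothetical:

        SELECT f.ProductNumber,
               f.OrderDate,
               ISNULL(d.ProductKey, -1) AS ProductKey   -- -1 = Unknown, as in the script
        FROM stg.FactSales AS f
        LEFT JOIN dbo.DimProduct AS d
            ON  d.ProductAlternateKey = f.ProductNumber
            AND f.OrderDate >= d.StartDate
            AND f.OrderDate <  ISNULL(d.EndDate, '9999-01-01');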

    Read the article

  • Parameterized StreamInsight Queries

    - by Roman Schindlauer
    The changes in our APIs enable a set of scenarios that were either not possible before or could only be achieved through workarounds. One such use case that people ask about frequently is the ability to parameterize a query and instantiate it with different values instead of re-deploying the entire statement. I’ll demonstrate how to do this in StreamInsight 2.1 and combine it with a method of using subjects for dynamic query composition in a mini-series of (at least) two blog articles. Let’s start with something really simple: I want to deploy a windowed aggregate to a StreamInsight server, and later use it with different window sizes. The LINQ statement for such an aggregate is very straightforward and familiar:

        var result = from win in stream.TumblingWindow(TimeSpan.FromSeconds(5))
                     select win.Avg(e => e.Value);

    Obviously, we had to use an existing input stream object as well as a concrete TimeSpan value. If we want to be able to re-use this construct, we can define it as an IQStreamable:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value));

    The DefineStreamable API lets us define a function, in our case from an IQStreamable (the input stream) and a TimeSpan (the window length) to an IQStreamable (the result). We can then use it like a function, with the input stream and the window length as parameters:

        var result = avg(stream, TimeSpan.FromSeconds(5));

    Nice, but you might ask: what does this save me, except from writing my own extension method? Well, in addition to defining the IQStreamable function, you can actually deploy it to the server, to make it re-usable by another process! When we deploy an artifact in V2.1, we give it a name:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");

    When connected to the same server, we can now use that name to retrieve the IQStreamable and use it with our own parameters:

        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");
        var result = averageQuery(stream, TimeSpan.FromSeconds(5));

    Convenient, isn’t it? Keep in mind that, even though the function “AverageQuery” is deployed to the server, its logic will still be instantiated into each process when the process is created. The advantage here is being able to deploy that function, so another client who wants to use it doesn’t need to ask the author for the code or assembly, but just needs to know the name of the deployed entity. A few words on the function signature of GetStreamable: the last type parameter (here: double) is the payload type of the result, not the actual result stream’s type itself. The returned object is a function from IQStreamable<SourcePayload> and TimeSpan to IQStreamable<double>. In the next article we will integrate this usage of IQStreamables with Subjects in StreamInsight, so stay tuned! Regards, The StreamInsight Team

    Read the article

  • Genetic Considerations in User Interface Design

    - by John Paul Cook
    There are several different genetic factors that are highly relevant to good user interface design. Color blindness is probably the best known. But did you know about motion sickness and epilepsy? We’ve been discussing how genetic factors should be considered in user interface design in one of my classes at Vanderbilt University School of Nursing. According to the National Library of Medicine, approximately 8% of males and 0.5% of females have red-green color discrimination problems with the most...(read more)

    Read the article
