Search Results

Search found 26810 results on 1073 pages for 'fixed point'.

Page 566/1073 | < Previous Page | 562 563 564 565 566 567 568 569 570 571 572 573  | Next Page >

  • setting up my own name server

    - by mmokh
    I'm in the process of setting up my own name servers using BIND9, and I want to understand how the name server setup relates to registrars and other name servers. Say I have the domain mydomain.com and I set up my two name servers: ns1.mydomain.com (192.168.0.1) and ns2.mydomain.com (192.168.0.2). 1) How does the world learn that my name servers are now at ns1.mydomain.com and ns2.mydomain.com? I read about setting up glue records at my registrar. Could you please elaborate on this? That is, once I set up these glue records, can I use my name servers in NS records for any other domain, e.g. NS records for otherdomain.com pointing to ns1.mydomain.com and ns2.mydomain.com? 2) Given that I set up the glue records as mentioned above, do I "have to" update the mydomain.com NS records to point to my own name servers? Or can I keep the mydomain.com NS records pointing to my registrar's name servers, while still using ns1.mydomain.com and ns2.mydomain.com as name servers for any other domain I own? Thanks
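
    (Illustrative sketch, not from the original post: this is roughly the delegation data the registrar publishes in the parent .com zone once glue records are in place - NS records for the domain plus glue A records for the name servers themselves. Names and addresses are the poster's examples; real glue records need public IPs, not RFC 1918 addresses.)

        ; parent-zone view of the delegation for mydomain.com (illustrative)
        mydomain.com.        IN  NS  ns1.mydomain.com.
        mydomain.com.        IN  NS  ns2.mydomain.com.
        ns1.mydomain.com.    IN  A   192.168.0.1   ; glue record
        ns2.mydomain.com.    IN  A   192.168.0.2   ; glue record

    Once these exist, other domains' NS records can name ns1/ns2.mydomain.com without further glue, since those names now resolve on their own.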

    Read the article

  • Leading Analyst Firm Positions Oracle in Leaders Quadrant for Web Content Management

    - by Christie Flanagan
    Gartner, Inc. has named Oracle a Leader in its latest “Magic Quadrant for Web Content Management.” Gartner’s Magic Quadrants position vendors within a particular quadrant based on their completeness of vision and their ability to execute on that vision. According to Gartner, “WCM plays an increasingly important role in business performance. It has become the central point of coordination for initiatives involving the enterprise's online presence, and these initiatives have become more sophisticated and more important to enterprises' business strategies. Thus, WCM is key for organizations wishing to execute a strategy of OCO (online channel optimization) that embraces areas such as customer experience management, e-commerce, digital marketing, multichannel marketing and website consolidation.” Gartner continued, “Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of OCO. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can: demonstrate enterprise deployments; offer integration with other business applications and content repositories; provide a vertical-process or horizontal-solution focus.” Oracle WebCenter, the engagement platform powering exceptional experiences for customers, employees and partners, connects people and information by bringing together the most complete portfolio of portal, Web experience management, content, social, and collaboration technologies into a single integrated product suite. Oracle WebCenter also provides the foundation for Oracle Fusion Middleware and Oracle Fusion Applications to deliver a next-generation user experience. To see the latest reports, webcasts and demonstrations about Oracle's web experience management solution, Oracle WebCenter Sites, please visit our Connected Customer Experience Resource Center.

    Read the article

  • Need advice on an approach for a web-based app that loads an Excel worksheet but exposes only the charts

    - by John
    I'm looking for suggestions on the Visual Studio approach to take for a web application that is in the conceptual stage. My environment has a lot of tools: Windows Server 2008 R2 Standard 64-bit, Visual Studio 2010 Professional Edition, SharePoint 2010 Server Enterprise Edition, SQL Server 2008 R2, and Office 2010 Professional. I know I will need this app to retrieve data from a database (or a web service - I'm not sure exactly at this point). The data needs to be placed in an Excel workbook dynamically. The app will need a nice user interface (standard web controls, perhaps with some JavaScript effects). The Excel ribbon and worksheet grid will need to be hidden, and some web control(s) will cause the Excel chart(s) to be rendered. I am thinking this sounds like Visual Studio Tools for Office (VSTO), so as to leverage .NET and hide Excel. Can you offer suggestions regarding: one ASP.NET web app project; one class library project for Excel (or perhaps which of the several different Excel 2010 project types - add-in, template, document - to use); and whether Excel Services for SharePoint would be useful (or required)? I am feeling a little overwhelmed with so many choices at this early stage of conceptualizing the app. Can you suggest some ideas for this sort of thing? Also, I am a bit more experienced with C#, but I've read that VB.NET is better for working with the Excel object model. What is the general advice with regard to tool choice and overall approach trade-offs?

    Read the article

  • Locking down a server for shared internet hosting.

    - by Wil
    Basically I control several servers, and I only host either static websites or scripts which I have designed myself, so I trust them up to a point. However, I have a few customers who want to start using scripts such as WordPress and many others - and they want full control over their accounts. I have started with the basics: in php.ini I have locked things down and restricted functions such as the proc_* family. However, there is obviously a lot more I can do. Right now, using NTFS permissions, I am trying to lock down the server by running application pools and individual sites as their own users, but I feel like I am hitting brick walls (see my old question on Server Fault). At the moment, the only route I can think of is either to implement an off-the-shelf control panel - which would be expensive and, quite frankly, over the top - or to follow the Microsoft guide, which is really for an entire infrastructure, not for someone who just wants to lock down a few servers. Does anyone have any guides that can put me on the correct path?
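
    (As a sketch of the php.ini side of this kind of lockdown - the directive names are real PHP directives, but the values here are illustrative, not the poster's actual config:)

        ; disable process-control and shell functions per site
        disable_functions = proc_open,popen,exec,shell_exec,system,passthru
        ; confine each site to its own directory tree
        open_basedir = "C:\inetpub\sites\customer1"
        ; avoid remote-include surprises in customer scripts
        allow_url_include = Off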

    Read the article

  • How to run sshfs through ssh command?

    - by Koryonik
    I tried to run sshfs through ssh in one command. For example, if I do:

        $ ssh user@host
        user@host$ sshfs host:/src /target

    everything is OK. Now, if I try this as one command:

        $ ssh -t user@host "sshfs host:/src /target"

    the mount point is not there afterwards. Using the sshfs debug option, it seems the volume is mounted and then immediately unmounted when the ssh connection ends. I also tried to run sshfs in a login shell, but the result is the same when exiting the shell:

        $ ssh -t user@host '/bin/sh -l -c "sshfs host:/src /target" && /bin/sh'

    What's wrong? Is there a better way?
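
    (An editorial aside, and only a guess: if the unmount is triggered by the forced pseudo-tty being torn down - and the SIGHUP that follows - then dropping -t may help. sshfs backgrounds itself after mounting, so detaching its streams lets the outer ssh return while the mount survives. A sketch, untested assumptions and all:)

        $ ssh user@host 'sshfs host:/src /target < /dev/null > /dev/null 2>&1'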

    Read the article

  • Page File - Why set one for each drive?

    - by Magic
    Hello, I have Windows Vista Business Edition running on my laptop (the brand is HCL). I have four drives, as follows: C - 29.2 GB (of which only 3.68 GB is free); D - 39 GB (of which 37.8 GB is free); E - 39 GB (of which 37.3 GB is free); F - 41.6 GB (of which 41.4 GB is free). However, my page file setting is "Automatically manage paging file size for all drives". Question: why should I set one for each drive? Should I set my page file on the OS root drive? I happened to talk to a system administrator at an IT company, and he advised that we should never put the page file on the OS drive but on an alternate drive wherever possible. It would be really helpful if you could guide me here, or at least point me to the right resources so that I can read about paging and the best practices for paging. Cheers,
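
    (For experimentation, the same settings can be scripted from an elevated command prompt via WMIC. This is a hedged sketch: computersystem and pagefileset are standard WMIC aliases, but verify the exact create syntax on your system; the drive letter and sizes are illustrative.)

        wmic computersystem set AutomaticManagedPagefile=False
        wmic pagefileset create name="D:\pagefile.sys"
        wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096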

    Read the article

  • Brain picking during job interview

    - by mark
    Recently, I had a job interview at a big Silicon Valley company for a senior software developer/R&D position. I had several technical phone screens, an all-day on-site interview, and more technical phone screens for another position later. The interviews went really well, and I have a PhD and working experience in the area I was applying for, yet no offer was made. So far, so good: it was an interesting experience, I am employed, and I have absolutely no hard feelings about this. However, some of the interviewers asked really detailed questions, to the point of being suspicious, about new technologies I have been working on. These technologies are still in development and have not come to market yet. I know some major hardware/software companies are working on this too. I have had many interviews before, and based on my former interviewing experience and the impression some of the interviewers left behind, I now believe that all this company wanted from me was to extract some ideas about what I did in this field. Remember, I am referring to an R&D position, not the standard software developer stuff. Has anybody encountered this situation before? And how did you deal with it? I am not so much concerned about "stealing" ideas as about being tricked into showing up for an interview when there is no intention to hire anyway. I am considering refusing technical interviews in the future and instead proposing a trial period during which the company can easily reconsider its hiring decision.

    Read the article

  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose that in a scene there are, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture, and its model is a long triangle strip that might bifurcate at some point into other, tinier strips; suppose also that the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out of photos and pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little bit at the edges (i.e., a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that involves serious restrictions on how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light - this could fail terribly to achieve correct results. So, is there a better, or at least a more obvious, way that's widely used in the game dev industry?
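
    (For context, the widely used technique this question is reaching for is usually called texture splatting: a blend-weight (alpha) map, painted by an artist or derived from the distance to the road edge, drives a per-pixel mix of the two textures - which is exactly the feathered transition described above. A minimal, engine-agnostic C# sketch of the per-texel blend; all names are illustrative, and in a real engine this math lives in a pixel shader:)

        // Texture splatting sketch: per-texel mix of two textures driven by a
        // blend-weight map (0 = pure grass, 1 = pure asphalt).
        struct Color
        {
            public float R, G, B;
            public Color(float r, float g, float b) { R = r; G = g; B = b; }
        }

        static class Splat
        {
            static float Lerp(float a, float b, float t) { return a + (b - a) * t; }

            // 'blend' is sampled from an artist-painted alpha map, or computed
            // from the distance to the road edge to get the feathered border.
            public static Color BlendTexel(Color grass, Color asphalt, float blend)
            {
                return new Color(Lerp(grass.R, asphalt.R, blend),
                                 Lerp(grass.G, asphalt.G, blend),
                                 Lerp(grass.B, asphalt.B, blend));
            }
        }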

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer which renders everything. I'm currently aiming for the second, for the following reasons: the list can be sorted so that each shader is bound only once (otherwise each object would have to bind its shader, because it can't know whether it's already active); the objects can be sorted and grouped; it's easier to swap APIs (with a few macro lines it can be easy to switch between a DirectX renderer and an OpenGL renderer - not a reason for my project, but still a good point); and the rendering code is easier to manage. Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work. First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry that this may end up carrying a lot of extra pointer weight, though I could sort the list of pointers every so often. Second idea: the entire list of entities is passed to the renderer on each render call. The renderer then sorts the list (on each call, or maybe just once?) and takes what it wants. That's a lot of passing and/or sorting, however. Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
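
    (A hedged sketch of the first idea - registration plus a shader-keyed sort - with every name invented for illustration; one way it could look, not a prescribed design:)

        // Central renderer: components register themselves; the renderer
        // sorts by shader key each frame so every shader binds only once.
        using System.Collections.Generic;

        class RenderComponent
        {
            public int ShaderId;                      // sort key
            public void Draw() { /* issue draw call */ }
        }

        class Renderer
        {
            private readonly List<RenderComponent> items = new List<RenderComponent>();

            public void Register(RenderComponent c)   { items.Add(c); }
            public void Unregister(RenderComponent c) { items.Remove(c); }

            public void RenderFrame()
            {
                // Cheap in practice: the list stays mostly sorted between frames.
                items.Sort((a, b) => a.ShaderId.CompareTo(b.ShaderId));
                int bound = -1;
                foreach (var c in items)
                {
                    if (c.ShaderId != bound) { /* bind shader c.ShaderId */ bound = c.ShaderId; }
                    c.Draw();
                }
            }
        }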

    Read the article

  • Windows Server 2008 UPS support

    - by Rory McCune
    I'm looking to set up a UPS on a Windows Small Business Server 2008 machine, and I've noticed that there are some large price differences between in-line UPSes of similar capacity. The most important point for me in UPS selection is that the server should be able to shut itself down before the UPS power runs out, so that if the server is unattended during an outage, the risk of data loss is minimized. From some reading, it appears that Windows Server 2008 has the ability to natively recognize a UPS, which can then be managed through the battery settings on the server or via WMI. What I'm wondering is whether Windows Server 2008's UPS support is specific to certain brands of UPS (e.g., APC), or whether it is likely to work with any UPS that has a USB port I can connect to the server.

    Read the article

  • How to add a new developer to the team

    - by lortabac
    I run a small company composed of only two developers. For one of our clients we are building a very big application, whose development has gone on for 1.5 years. Now this client has found an important sponsorship, and they are organizing some events related to this project, so we have a deadline in two months and we can't miss it. We are thinking of adding a new developer to the team, and I am wondering what we can do to help his integration. This is the situation: we are approaching the threshold of Brooks's law, the point at which adding new developers becomes counter-productive; the application is relatively well designed, but the implementation is chaotic in places (especially the older code); there are unit tests only for the more recent code (when this project started, we didn't have the habit of writing tests); documentation and comments are incomplete; the application is both large and complex; and the client has written down almost every detail about his project, in a very clear and "programmer-friendly" way. Is it a good idea to add a person now? If so, what can we do to help the new developer integrate into the team?

    Read the article

  • Upgraded to 12.04 now wifi doesn't work

    - by Benito Kestelman
    My laptop's wifi stopped working when I upgraded to Ubuntu 12.04 (wired works). I just reinstalled 12.04 over my old 12.04 - on which wifi didn't work either - in an attempt to restore any settings I may have accidentally changed, but it still doesn't work. I also used a wired connection to install updates in case this bug has been fixed, but it has not. Here is the result of sudo lshw -class network:

        *-network
             description: Wireless interface
             product: Centrino Wireless-N + WiMAX 6150
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 67
             serial: 40:25:c2:5f:5b:f4
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-29-generic-pae firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:51 memory:de800000-de801fff
        *-network
             description: Ethernet interface
             product: AR8151 v2.0 Gigabit Ethernet
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: eth0
             version: c0
             serial: 14:da:e9:c0:da:78
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair
             resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128)

    Here is rfkill list all:

        0: phy0: Wireless LAN
             Soft blocked: no
             Hard blocked: no
        1: asus-wlan: Wireless LAN
             Soft blocked: no
             Hard blocked: no
        2: asus-wimax: WiMAX
             Soft blocked: no
             Hard blocked: no

    lsusb:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 8087:07d6 Intel Corp.
        Bus 001 Device 004: ID 13d3:5710 IMC Networks
        Bus 002 Device 003: ID 045e:0745 Microsoft Corp. Nano Transceiver v1.0 for Bluetooth
        Bus 003 Device 003: ID 0781:5530 SanDisk Corp. Cruzer

    lspci:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
        00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67)
        03:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        04:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)

    Read the article

  • Programming languages with a Lisp-like syntax extension mechanism

    - by Giorgio
    I have only limited knowledge of Lisp (I am trying to learn a bit in my free time), but as far as I understand, Lisp macros allow one to introduce new language constructs and syntax by describing them in Lisp itself. This means that a new construct can be added as a library, without changing the Lisp compiler/interpreter. This approach is very different from that of other programming languages. E.g., if I wanted to extend Pascal with a new kind of loop or some particular idiom, I would have to extend the syntax and semantics of the language and then implement the new feature in the compiler. Are there other programming languages outside the Lisp family (i.e., apart from Common Lisp, Scheme, Clojure (?), Racket (?), etc.) that offer a similar possibility to extend the language within the language itself? EDIT: Please avoid extended discussion and be specific in your answers. Instead of a long list of programming languages that can be extended in some way or another, I would like to understand from a conceptual point of view what is specific to Lisp macros as an extension mechanism, and which non-Lisp programming languages offer some concept that is close to them.

    Read the article

  • Mac OS X Server 10.6.6 "disables" CPU in VMware Fusion

    - by wjlafrance
    Hello! I installed Mac OS X Server 10.6.0 in VMware Fusion the other night and it worked perfectly until I ran Software Update. I upgraded to 10.6.6 through the combo updater, and now when I start the VM it says: "The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point." I've switched the operating system in the options to Mac OS X Server 32-bit, 64-bit, and even to Windows 7, and nothing has worked. Does anyone have any ideas?

    Read the article

  • Redmine Subversion: LDAP _and_ local auth

    - by Frank Brenner
    I need to set up a Subversion repository with Apache authentication against both an external LDAP server and the local Redmine database. That is, we have users whose accounts exist only in the LDAP directory and users whose accounts exist only in the local Redmine db - all of them should be able to access the repo. I can't quite seem to get the Apache config right for this. I know I saw a how-to for this at some point, I think using Redmine.pm, but I can't seem to find it anymore. Thanks.
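
    (For reference, a sketch of the Redmine.pm style of config this probably refers to - the handler ships with Redmine as extra/svn/Redmine.pm; the DSN, paths, and credentials below are illustrative. Redmine.pm authenticates against the Redmine database, and Redmine itself can be backed by LDAP auth sources, which is what makes the mixed LDAP/local setup possible:)

        PerlLoadModule Apache::Authn::Redmine
        <Location /svn>
            DAV svn
            SVNParentPath "/var/svn"
            AuthType Basic
            AuthName "Redmine Subversion"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=redmine;host=localhost"
            RedmineDbUser "redmine"
            RedmineDbPass "secret"
        </Location>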

    Read the article

  • Learning good OOP design & unlearning some bad habits

    - by Nick
    I have mostly been a C programmer so far in my career, with some knowledge of C++. I rely on C++ mostly for the convenience the STL provides, and I hardly ever focus on good design practices. As I have started to look for a new position, this bad habit of mine has come back to haunt me. During interviews, I have been asked to design a problem (like chess, or some other scenario) using OOP, and I did really badly at it (I learned this through feedback from one interview). I tried to google the subject and came up with so many opinions and related books that I don't know where to begin. I need a good, thorough introduction to OOP design from which I can learn practical design, not just theory. Can you point me to any book which meets my requirements? I prefer C++, but any other language is fine as long as I can pick up good practices. Also, I know that books can only go so far. I would also appreciate any good practice project ideas that helped you learn and improve your OOP concepts. Thanks.

    Read the article

  • UNESCO, J-ISIS, and the JavaFX 2.2 WebView

    - by Geertjan
    J-ISIS, which is the newly developed Java version of the UNESCO generalized information storage and retrieval system for bibliographic information, continues to be under heavy development and code refactoring in its open source repository. Read more about J-ISIS and its NetBeans Platform basis here. Soon a new version will be available for testing, and it would be cool to see the application in action at that time. Currently, it looks as follows, though note that the menu bar is under development and many menus you see there will be replaced or removed soon: About one aspect of the application, the browser, which you can see above, Jean-Claude Dauphin, its project lead, wrote me the following: "The DJ-Native Swing JWebBrowser has been a nice solution for getting a Java web browser for most popular platforms. But the Java integration has always produced, from time to time, some strange behavior (like losing the focus on the other components after clicking on the browser window, overlapping of windows, etc.), most probably because of mixing heavyweight and lightweight components and also because of our incompetence in solving the issues. Thus, recently we changed to the JavaFX 2.2 WebView. The integration with Java is fine and we have got rid of all the DJ-Native Swing problems. However, we have lost some features which were given for free with the native browsers, such as downloading resources in different formats and opening them in the right application." This is a pretty cool step forward, i.e., the JavaFX integration. It also confirms for me something I've heard other people saying too: the JavaFX WebView component is a perfect low-threshold entry point for Swing developers feeling their way into the world of JavaFX.

    Read the article

  • Philosophy behind the memento pattern

    - by TheSilverBullet
    I have been reading up on the memento pattern from various sources on the internet. Differing information from different sources has left me confused about why this pattern is actually needed. The dofactory implementation says that the primary intention of this pattern is to restore the state of the system. Wiki says that the primary intention is to be able to restore changes made to the system. This gives a different impression: it implies that a system could have a memento implementation with no need to restore at all, and that the ability to restore is just one feature of it. OODesign says that it is sometimes necessary to capture the internal state of an object at some point and have the ability to restore the object to that state later in time, which is useful in case of error or failure. So, my question is: why exactly do we use this pattern? Is it to save previous states, or to promote encapsulation between the Caretaker and the Memento? Why is this type of encapsulation so important? Edit: for those visiting, check out this implementation!
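
    (For what it's worth, a minimal C# sketch of the pattern - names are illustrative - that shows both answers at once: the caretaker gets undo storage, while nesting the memento inside the originator keeps its state sealed off from everyone else, which is the encapsulation the question asks about:)

        // The caretaker (History) stores snapshots but cannot look inside them;
        // only the originator (Editor) can create and consume its mementos.
        using System.Collections.Generic;

        class Editor
        {
            private string text = "";
            public void Type(string s) { text += s; }

            public Memento Save() { return new Memento(text); }
            public void Restore(Memento m) { text = m.State; }

            // Nested so its state stays hidden from everything but Editor;
            // 'internal' keeps it invisible outside the assembly.
            public class Memento
            {
                internal string State { get; private set; }
                internal Memento(string state) { State = state; }
            }
        }

        class History
        {
            private readonly Stack<Editor.Memento> snapshots = new Stack<Editor.Memento>();
            public void Push(Editor.Memento m) { snapshots.Push(m); }
            public Editor.Memento Pop() { return snapshots.Pop(); }
        }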

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ “magic” in my hearing. I leapt to LINQ’s defense immediately. It turns out some people don’t realize “magic” can be a pejorative term. I thought LINQ needed demystification. Here’s your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won’t repeat much of what Matt Warren says in his excellent series, but I will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let’s tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        ...
        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
        ...

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., “an object structure that represents the structure of the anonymous function itself” (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        ...
        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")), prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
        ...

    If the “products” expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we’ve reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I’m using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(/* this method */, source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we’re constructing an expression tree that describes a query. Interestingly, the compiler and the operator methods are colluding to construct a query expression tree. The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent: the provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the “products” collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> rather than IQueryable<Product>. The type mismatch is triggered initially by a “leaf” expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as “type irritation” because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let’s consider the behavior of an alternative LINQ provider. If “products” is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the “leaf” expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a “process” in StreamInsight 2.1, what is a leaf? To StreamInsight, a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

    Example 1: defining a source and composing a query

    Let’s look in more detail at what’s happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can’t serialize ‘local’!

    Example 2: error referencing an unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The “define” methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you’ve defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using “bridge” method overloads (ToEnumerable, ToObservable and To*Streamable). Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (which depends on the Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(() =>
                new NorthwindDataContext(), context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it ‘owns’ the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let’s use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));
        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;
        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It’s incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1! Regards, The StreamInsight Team

    Read the article

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model, and we perform transaction log backups every hour. However, it seems to be pretty random: at some point during the day, the log backup will go from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.
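
    (One hedged starting point, using standard SQL Server 2005 DMVs - the database name is a placeholder: sample log usage periodically to find the window, then see which open transactions are generating the log in it:)

        -- Per-database log size and percent used; run periodically to find the window.
        DBCC SQLPERF(LOGSPACE);

        -- During the suspect window: who is writing the most transaction log?
        SELECT s.session_id, s.host_name, s.program_name, s.login_name,
               t.database_transaction_log_bytes_used
        FROM sys.dm_tran_database_transactions t
        JOIN sys.dm_tran_session_transactions st ON t.transaction_id = st.transaction_id
        JOIN sys.dm_exec_sessions s ON st.session_id = s.session_id
        WHERE t.database_id = DB_ID('YourDatabase')
        ORDER BY t.database_transaction_log_bytes_used DESC;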

    Read the article

  • WMII Terminal Width of 80 Columns for xterm (colrules)

    - by BCable
    I'm trying to get WMII to split horizontally at 80 columns for xterm, but I only see a way to do this via percentages. It would be nice to be able to set it by something other than a percentage for various resolutions, but if I have to deal with that, I will. The problem is that even percentages don't work at my resolution (1366x768): 47+47 in /colrules yields 79 characters, and 48+48 yields 81 characters. As far as I can tell, decimals are not allowed, so I can't use 47.5, for instance. I came from Ion3, and I'm used to using 80-column terminals, resizable by the keyboard, to get a reasonable cut-off point for Vim when I'm coding. I would settle for using the mouse, but WMII seems to be much more fluid than Ion3, so I would have to do it a LOT, which sounds annoying. Any ideas?
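
    (For readers unfamiliar with the file in question, /colrules holds rules of roughly this shape - the values here are illustrative - and the numbers are column-width percentages, which is exactly the limitation being described:)

        # written via: wmiir write /colrules
        /.*/ -> 48+52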

    Read the article

  • Can a website have too many bindings?

    - by justSteve
    IIS 7.x on a Windows Server 2008 Web edition dedicated server. I have a site that's serving a few dozen affiliates, many of which are hitting me via a subdomain of their own root domain, and all of which have a subdomain specific to their account. E.g., my affiliate named 'Acme' hits my site via: myApp.Acme.com (his root, my app) and Acme.MyDomain.com (his account within my root domain). Currently I'm adding each of these as a binding entry in IIS (targeting a discrete IP, not '*'). As I ramp this up to include more affiliates, I'm wondering if I should be concerned about how many bindings this site handles. Probably, in Acme's case, I can do without Acme.MyDomain.com because, in reality, all traffic takes place via myApp.Acme.com. Mine is a niche site - very low volume compared to most. At what point do I worry about all those bindings? thx
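
    (An aside: at a few dozen affiliates, bindings can be scripted rather than clicked in. A sketch using IIS7's appcmd - the site name is hypothetical and the host name is the poster's example:)

        %windir%\system32\inetsrv\appcmd list sites
        %windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:Acme.MyDomain.com']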

    Read the article

  • Measuring the throughput of a test device connected to a server via an AP

    - by samantha
    Description of the project: I have a test tool to which a DUT (device under test) connects. The test tool has an access point in it, and once the DUT is connected via its MAC address, we check the RSSI and some other WiFi features of the DUT. Now I am wondering whether there is any way to measure the throughput of the device under test, via its MAC address, from the server side. The test tool runs Linux (Fedora 11), and most of the coding is done in C/C++ and JSON commands. Previously, I installed an FTP server on the test tool; the DUT can connect to it and we can measure the throughput (data transfer rate), but this is not a feasible solution, as it requires a lot of intervention from the DUT. What I am interested in is: 1) running some script on the server side/test tool that gives me the throughput of the connected device, perhaps via the DUT's MAC address; or 2) having a server script transfer some files/packets to the DUT so we can measure the throughput. Coding is not a major challenge at this stage; I just need some tool that requires minimum intervention from the DUT.

    Read the article

  • Trying to make changes to the size of the events buffer in prelude-ids auditd plugin

    - by tharris
    I am running systems using the prelude-ids plugin for auditd. When the manager is up, everything works fine. However, I have a requirement that when the clients can't talk to the manager, they should store no more than 250 MB of messages, and when they hit that point they should start deleting the oldest events. All I can find is that audispd can be set to an overflow action of ignore, syslog, suspend, single, or halt, none of which meets my requirement, and several of which I really cannot use. Does anyone know a way to do this? I know the events get stored in /var/spool/prelude/auditd/global, but I can't find anything about configuring how things are stored there. There are usually several files in the global directory, but only 2 of them ever go above 0 in size: data0 and data0.journal.

    Read the article

  • How do I set up different configurations on an Ubuntu laptop based on different physical locations?

    - by Andrew Larned
    I'm looking for a way to have two (or three) configurations set up on my laptop, and to easily switch between them. To be more specific: when my laptop is at work, it's plugged into a second monitor and has a specific set of network configurations. At home, the second monitor is gone and the network configurations are different. At a public wireless access point there are other configurations to set, and so on. I know I can go into my preferences and turn the monitor on or off, mess with the networking preferences, and so on, but I'm looking for a way to change a bunch of preferences all at once - and if it's possible to do that automatically, maybe based on the wireless APs in the vicinity, that would be even better.
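
    (One low-tech way to get the "automatically, based on the wireless AP" part - a sketch under assumptions: wireless-tools is installed so iwgetid is available, and the xrandr output names and SSIDs below are hypothetical and will differ per machine:)

        #!/bin/sh
        # Pick a profile from the SSID of the currently associated AP.
        SSID=$(iwgetid -r)
        case "$SSID" in
            WorkNet) xrandr --output VGA1 --auto --right-of LVDS1 ;;  # enable second monitor
            HomeNet) xrandr --output VGA1 --off ;;
            *)       : ;;                                             # public/unknown: leave defaults
        esac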

    Read the article
