Search Results

Search found 8550 results on 342 pages for 'datetime operation'.


  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.
    If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty).
    That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an insert, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.
    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values.)
    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it.
    The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you need to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit.
    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do: we just use ON 1=2. This is obviously more convoluted than a straight INSERT, and it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
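
    A minimal sketch of the pattern described above, run from C# so you can see OUTPUT handing the rows straight back to the caller – this is not Rob's original example; the Product/ProductStaging tables, the 'FeedA' source tag and the connection string are hypothetical, and the statement assumes SQL Server 2008 or later:

    // Hypothetical sketch: a MERGE whose OUTPUT clause returns $action, a
    // source-only column ('src') and the inserted values back to the client.
    using System;
    using System.Data.SqlClient;

    class MergeOutputDemo
    {
        static void Main()
        {
            const string sql = @"
                MERGE dbo.Product AS t
                USING (SELECT ProductID, Name, 'FeedA' AS src FROM dbo.ProductStaging) AS s
                ON t.ProductID = s.ProductID
                WHEN MATCHED THEN
                    UPDATE SET Name = s.Name
                WHEN NOT MATCHED THEN
                    INSERT (ProductID, Name) VALUES (s.ProductID, s.Name)
                OUTPUT $action, s.src, inserted.ProductID, inserted.Name;";

            // Connection string is a placeholder for illustration only.
            using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each row reflects one change made by the MERGE, including the
                        // source-only 'src' column that never touched the target table.
                        Console.WriteLine("{0}: {1} ({2})",
                            reader.GetString(0), reader.GetInt32(2), reader.GetString(1));
                    }
                }
            }
        }
    }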

    Read the article

  • Stuxnet was reportedly tested beforehand by the Israeli military, more than two years before it was unleashed on Iran

    Stuxnet was reportedly tested beforehand by the Israeli military, more than two years before it was unleashed on Iran. Update of 17.01.2011 by Katleen. New information has just emerged about Stuxnet, Iran's bête noire. The worm is said to have been tested in secret at an Israeli military base (the Dimona complex in the Negev desert) before being unleashed on Iran's nuclear infrastructure. Two years before they were carried out, the attacks were reportedly simulated on the same machines as those targeted, in order to guarantee the future success of the full-scale operation. As a result, experts reportedly examined the flaws of the controllers ...

    Read the article

  • Evolution has no access to couchdb

    - by berkes
    Evolution gives the error "Cannot open addressbook": "We were unable to open this addressbook. This either means you have entered an incorrect URI, or the server is unreachable." "Details: Operation not permitted." (rough translation from Dutch). Enabling verbose logging in (desktop)couchdb tells me roughly the same:
    [info] [<0.7875.1>] 127.0.0.1 - - 'PUT' /contacts/ 400
    [debug] [<0.7875.1>] httpd 400 error response: {"error":"invalid_consumer","reason":"Invalid consumer (key or signature method)."}
    It seems that Evolution tries to fetch the contacts, couchdb denies access, and Evolution then fails to do a proper OAuth handshake. This is on Ubuntu 10.10, with its default desktopcouch 1.0.1. Any hints on where to start would be most appreciated :)

    Read the article

  • Is it safe to run upgrade with Pinned Applications in Synaptic

    - by BlueXrider
    I have firefox, thunderbird, thunderbird-locale-en, thunderbird-locale-en-us, xul-ext-calendar-timezones, xul-ext-gdata-provider and xul-ext-lightning pinned in Synaptic. When I run apt-get upgrade, I get the following:
    The following packages will be upgraded:
      boot-repair boot-sav darktable firefox libvlc5 libvlccore5 thunderbird thunderbird-locale-en thunderbird-locale-en-us vlc vlc-data vlc-nox vlc-plugin-notify vlc-plugin-pulse xul-ext-calendar-timezones xul-ext-gdata-provider xul-ext-lightning
    17 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 53.0 MB of archives.
    After this operation, 606 kB of additional disk space will be used.
    Do you want to continue [Y/n]?
    Are these packages really going to be upgraded?

    Read the article

  • Prevent shutdown when rsnapshot is running

    - by highsciguy
    Since shutdowns during rsnapshot operation will lead to inconsistent/partial backups, I wonder how to delay the system shutdown while rsnapshot is active. The task is complicated by the fact that I need a solution that works for non-expert users, i.e. I need to reliably tell the user that they need to wait until the process is finished and not do a hard reset. Once it has finished, the shutdown should continue. A possible solution could be to replace the action of the window manager's (mostly KDE) shutdown/restart/hibernate buttons with a script that first checks whether rsync is active and shows a message if so. But I do not know if this is possible in KDE.

    Read the article

  • Protecting Cookies: Once and For All

    - by Your DisplayName here!
    Every once in a while you run into a situation where you need to temporarily store data for a user in a web app. You typically have two options here – either store it server-side or put the data into a cookie (if size permits). When you need web farm compatibility in addition, things become a little bit more complicated because the data needs to be available on all nodes. In my case I went for a cookie – but I had some requirements:
    - Cookie must be protected from eavesdropping (sent only over SSL) and from client script
    - Cookie must be encrypted and signed to protect it from tampering
    - Cookie might become bigger than 4KB – some sort of overflow mechanism would be nice
    I really didn’t want to implement another cookie protection mechanism – this feels wrong and btw can go wrong as well. WIF to the rescue. The session management feature already implements the above requirements, but is built around de/serializing IClaimsPrincipals into cookies and back. But if you go one level deeper you will find the CookieHandler and CookieTransform classes, which contain all the needed functionality.

    public class ProtectedCookie
    {
        private List<CookieTransform> _transforms;
        private ChunkedCookieHandler _handler = new ChunkedCookieHandler();

        // DPAPI protection (single server)
        public ProtectedCookie()
        {
            _transforms = new List<CookieTransform>
            {
                new DeflateCookieTransform(),
                new ProtectedDataCookieTransform()
            };
        }

        // RSA protection (load balanced)
        public ProtectedCookie(X509Certificate2 protectionCertificate)
        {
            _transforms = new List<CookieTransform>
            {
                new DeflateCookieTransform(),
                new RsaSignatureCookieTransform(protectionCertificate),
                new RsaEncryptionCookieTransform(protectionCertificate)
            };
        }

        // custom transform pipeline
        public ProtectedCookie(List<CookieTransform> transforms)
        {
            _transforms = transforms;
        }

        public void Write(string name, string value, DateTime expirationTime)
        {
            byte[] encodedBytes = EncodeCookieValue(value);
            _handler.Write(encodedBytes, name, expirationTime);
        }

        public void Write(string name, string value, DateTime expirationTime, string domain, string path)
        {
            byte[] encodedBytes = EncodeCookieValue(value);
            _handler.Write(encodedBytes, name, path, domain, expirationTime, true, true, HttpContext.Current);
        }

        public string Read(string name)
        {
            var bytes = _handler.Read(name);

            if (bytes == null || bytes.Length == 0)
            {
                return null;
            }

            return DecodeCookieValue(bytes);
        }

        public void Delete(string name)
        {
            _handler.Delete(name);
        }

        protected virtual byte[] EncodeCookieValue(string value)
        {
            var bytes = Encoding.UTF8.GetBytes(value);
            byte[] buffer = bytes;

            foreach (var transform in _transforms)
            {
                buffer = transform.Encode(buffer);
            }

            return buffer;
        }

        protected virtual string DecodeCookieValue(byte[] bytes)
        {
            var buffer = bytes;

            for (int i = _transforms.Count; i > 0; i--)
            {
                buffer = _transforms[i - 1].Decode(buffer);
            }

            return Encoding.UTF8.GetString(buffer);
        }
    }

    HTH
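
    A quick usage sketch (not part of the original post – the cookie name and value are arbitrary, and this assumes the single-server DPAPI constructor):

    // Hypothetical usage of the ProtectedCookie class above.
    var cookie = new ProtectedCookie();
    cookie.Write("temp-data", "some value to protect", DateTime.UtcNow.AddHours(1));

    string value = cookie.Read("temp-data");   // decompresses, decrypts and validates
    cookie.Delete("temp-data");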

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We will compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time.
    Hopping (and its syntactic-sugar-sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length and the offset from one window to the next. They are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows.
    Two aspects in this diagram are important:
    - Since this window is overlapping, an event can fall into more than one window.
    - If an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That’s the default and currently only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.)
    The set-based operation will be applied to each of these sets, yielding a result. This result is:
    - A single scalar value in the case of built-in or user-defined aggregates.
    - A subset of the input payloads, in the case of the TopK operator.
    - Arbitrary events, when using a user-defined operator.
    The timestamps of the result are almost always the ones of the windows. Only the user-defined operator can create new events with timestamps. (However, even these event lifetimes are subject to the window’s output policy, which is currently always to clip to the window end.) Let’s assume we were calculating the average over some payload field:

    var result = from window in source.HoppingWindow(
                     TimeSpan.FromHours(1),
                     TimeSpan.FromMinutes(15),
                     HoppingWindowOutputPolicy.ClipToWindowEnd)
                 select new { avg = window.Avg(e => e.Value) };

    Now each window is reflected by one result event. As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes – except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time).
    The “forced” output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply – let me explain: imagine you have a lot of events that you group by and then aggregate within each group – a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can be the remedy.
    Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur. In other words, if you mark all event start and end times on your timeline, then you are looking at all snapshot window boundaries. If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed in a recent tweet: “I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need”.
    Let’s look at a simple example: I want to compute the average of some value in my events over the last minute. I don’t want this output to be produced at fixed intervals, but as soon as it changes (that’s the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events’ lifetimes into the future. Applying the snapshot window on these events, it will appear to be “looking back into the past”. If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

    var result = from window in source
                     .AlterEventDuration(e => TimeSpan.FromMinutes(1))
                     .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                 select new { avg = window.Avg(e => e.Value) };

    With more complex modifications of the event lifetimes you can achieve many more query patterns. For instance, “running totals” by keeping the event start times but snapping their end times to some fixed time, like the end of the day. Each snapshot then “sees” all events that have happened in the respective time period so far. Regards, The StreamInsight Team

    Read the article

  • Asynchronous update design/interaction patterns

    - by Andy Waite
    These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them, then rather than wait for the round trip to the server, the app can hide the one you deleted, giving immediate feedback. The actual deletion on the server happens in the background. This can be seen in web apps, desktop apps, iOS apps, etc. But what about when the background operation fails? How should you feed that back to the user? Should you restore the UI to the pre-deletion state? What about when multiple background operations fail together? Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?
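
    This is often called an optimistic update (or optimistic UI): apply the change locally, keep enough state to undo it, and roll back with an error notice if the server call fails. A minimal sketch of one way to do that – the Widget, IWidgetApi and INotifier types below are hypothetical, not an established API:

    using System;
    using System.Collections.ObjectModel;
    using System.Threading.Tasks;

    // Hypothetical sketch of an optimistic delete with rollback on failure.
    public class WidgetListViewModel
    {
        public ObservableCollection<Widget> Widgets { get; } = new ObservableCollection<Widget>();

        private readonly IWidgetApi _api;            // hypothetical server API
        private readonly INotifier _notifications;   // hypothetical user notification service

        public WidgetListViewModel(IWidgetApi api, INotifier notifications)
        {
            _api = api;
            _notifications = notifications;
        }

        public async Task DeleteWidgetAsync(Widget widget)
        {
            int originalIndex = Widgets.IndexOf(widget);
            Widgets.Remove(widget);                      // hide it immediately (optimistic)

            try
            {
                await _api.RemoveWidgetAsync(widget.Id); // actual deletion in the background
            }
            catch (Exception ex)
            {
                // The background operation failed: restore the pre-deletion state
                // and tell the user why.
                Widgets.Insert(originalIndex, widget);
                _notifications.ShowError($"Could not delete '{widget.Name}': {ex.Message}");
            }
        }
    }

    public class Widget { public int Id; public string Name; }
    public interface IWidgetApi { Task RemoveWidgetAsync(int id); }
    public interface INotifier { void ShowError(string message); }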

    Read the article

  • Do CDNs work with POST operations?

    - by iddqd
    I'm using a CDN (Level3) for the first time and I'm a bit confused. I'm accessing dynamic URLs such as http://cdn.mysite.com?getItem=1234 that return text data. Do CDNs work with HTTP POST operations? When I issue an HTTP POST operation, my "real" server receives the request every time, so I'm wondering if the CDN has a problem with POST operations. If I use HTTP GET it seems to work: I call the URL once (from my application) and I can see my server receiving the request; if I call it a second time, the CDN delivers it directly and my server doesn't get anything. However, if I open the same link manually from a second browser tab, my server is asked to deliver it again – shouldn't it be cached by now? Many thanks.

    Read the article

  • Where to draw the line between front end and back end

    - by Twincascos
    I was recently contracted to develop a Smarty theme for an automated SOHO phone answering service. The team who had built the backend wouldn't allow me access to any of the back end, nor tell me anything about it – their Smarty set-up, Smarty plugins, database interface API, server set-up, nothing. Nor could I get access to the server or a beta domain: basically zero co-operation. So I set up a local server with Smarty and built the template based on what I guessed would be their best practice, commented my code like crazy, and wrote all the needed JavaScript, CSS, and template files. Then I sent them packaged to the backend team and hoped for the best. With half of the project team failing to cooperate or even communicate, I am now concerned that they may reply saying that everything is wrong and refuse to implement the new front end. I'm curious to know if others have encountered this type of situation and what you may have done to protect yourselves.

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Suppose I have a vector A and, after an arbitrary rotation, I get vector B. I want to apply this same rotation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating A x B). This vector C is the axis I'll need to rotate around. To find the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need to do is then: rotate ang degrees around the C axis. The problem is that I don't know whether this rotation is a CW or CCW rotation, since the dot product does not give me the sign of the angle. There's a trick in 2D (A.x * B.y - A.y * B.x) that you can use to discover whether vector A is to the left or right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
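
    For what it's worth, a sketch of one approach (assuming System.Numerics; the sample vectors are arbitrary): if you always take the axis as C = A x B and the positive angle from acos, the rotation is consistent by construction – swapping A and B flips the axis as well, so no separate CW/CCW test is needed. The 2D left/right trick generalises to the sign of Dot(Cross(A, B), N) for a chosen reference normal N.

    using System;
    using System.Numerics;

    class RotationFromTwoVectors
    {
        // Build the rotation that takes direction a onto direction b.
        // Sketch only: a and b must not be parallel or anti-parallel.
        static Quaternion RotationBetween(Vector3 a, Vector3 b)
        {
            a = Vector3.Normalize(a);
            b = Vector3.Normalize(b);

            Vector3 axis = Vector3.Normalize(Vector3.Cross(a, b)); // the C axis
            float d = Vector3.Dot(a, b);
            if (d > 1f) d = 1f;
            if (d < -1f) d = -1f;
            float angle = (float)Math.Acos(d);                     // always positive

            // Positive angle about A x B takes a onto b (right-hand rule).
            return Quaternion.CreateFromAxisAngle(axis, angle);
        }

        static void Main()
        {
            var a = new Vector3(1, 0, 0);   // arbitrary example vectors
            var b = new Vector3(0, 1, 0);

            Quaternion q = RotationBetween(a, b);

            // Re-use the same rotation on any other vector:
            Vector3 rotated = Vector3.Transform(new Vector3(1, 1, 0), q);
            Console.WriteLine(rotated);     // roughly (-1, 1, 0)

            // 3D analogue of the 2D (A.x*B.y - A.y*B.x) test: pick a reference
            // normal n, then the sign of Dot(Cross(a, b), n) says on which side
            // of a the vector b lies with respect to n.
            Vector3 n = new Vector3(0, 0, 1);
            float side = Vector3.Dot(Vector3.Cross(a, b), n);
            Console.WriteLine(side >= 0 ? "CCW about n" : "CW about n");
        }
    }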

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because:
    - Each particle is a point sprite in a single mesh, and I can't use the scene graph's ability to sort transparent nodes. The system node should be properly sorted, though.
    - Particle position is computed on the shader from initial velocity, acceleration and time. In order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid.
    - Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation.
    Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?

    Read the article

  • Two researchers turn Kinect into a surgeon's assistant, thanks to open-source drivers

    Two researchers turn Kinect into a surgeon's assistant, thanks to open-source drivers. Update of 24.12.2010 by Katleen. And here is yet another use for Kinect! This one was developed in Switzerland by two researchers from the University of Medicine in Bern. They started from the observation that, during an operation, when a surgeon needs information about his patient's file or the procedure in progress, he has to go through a protocol that costs him precious time: take off his gloves, go to the computer, grab the mouse, navigate with it, then put his gloves back on before getting back to work. Yet these precious seconds can...

    Read the article

  • Bug once in a while, but high priority

    - by Shirish11
    I am working on a CNC (computer numerical control) project which cuts shapes into metal with the help of a laser. Now my problem is that once in a while (1-2 times in 20-odd days) the cutting goes wrong, or does not match what was set. This causes losses, so the client is not very happy about it. I tried to find the cause by:
    - including log files
    - debugging
    - repeating the same environment
    But it won't repeat. A pause-and-continue operation will make it run smoothly again, with the bug reappearing later. How do I tackle this issue? Should I report it as a hardware problem?

    Read the article

  • Best practice for bulk eCommerce product upload?

    - by Or W
    I'm thinking about opening a large online store for jewelry, and the one thing that really bothers me is managing the actual operation of taking pictures, uploading and describing all the products. I'm trying to figure out the best way to do it, in terms of performance and the least time spent. Just a few things to keep in mind:
    - I'll have over 1,000 items in the online store
    - I'll have 3-4 pictures for each item; I'm using a DSLR camera if it makes any difference
    - I'm probably going to use Magento, unless you have better experience with another eCommerce platform that will help me get this done quickly
    - I'll need to randomly(?) create a product code for each item

    Read the article

  • How to prevent the system from generating log files

    - by shantanu
    My question may be a little surprising, but I need this. I am using a laptop with a slow processor, and I have found that the HDD has some bad sectors and its response has become slow. But disk health is OK (according to SMART tools). I cannot change my HDD right now, so I have decided to reduce disk operations. How do I prevent the system from generating log files, or any other files that are used to keep history? I know log files are very important, but I don't care about them right now. Please help.

    Read the article

  • sbackup: can not mount FTP automatically

    - by ledy
    In the sbackup configuration GUI I set ftp://user:pw@online/storage and it's marked as successfully connected. After the daily backup time, I checked the FTP and it was empty. The error mail says:
    Error in _do_mount: volume doesn't implement mount [ERROR_NOT_SUPPORTED - Operation not supported for the current backend.]
    Unable to mount: volume doesn't implement mount
    File access manager not initialized
    When restarting the sbackup GUI, it is no longer connected to the FTP and I have to click the button again to connect the remote directory – although it still knows my user/pw. How do I save this setting permanently?

    Read the article

  • Where did my hard drive go?

    - by Mike Carron
    I installed XBMCbuntu 11.0 on my Zotac Zbox AD03 with an OCZ Reflex 4 256 GB SSD. The install worked fine and I was getting accustomed to the appearance and operation. When I attempted to boot from power-off, the BIOS could no longer find the SSD. It refused to boot, and when I checked in the BIOS, the SSD was missing from the boot list (it was there prior to the install). I rebooted from the install CD, but when the system started it could not find the SSD. I replaced the SSD with a fresh one of the same type and reinstalled XBMCbuntu. This time I rebooted the system several times successfully, but when I shut it down and tried to cold boot, this drive was also gone. Does the installation do something strange to the boot record that could cause a BIOS to lose it? How do I fix this? mike

    Read the article

  • Fixed Sized Buffer or Variable Buffers with C# Sockets

    - by Keagan Ladds
    I am busy designing a TCP server class in C# that has events and allows the user of the class to define packets that the server can send and receive, by registering a class that is derived from my "GenericPacket" class. My TCPListener uses async methods such as .BeginReceive(..). My issue is that because I am using .BeginReceive(), I need to specify a buffer size when I call the function. This means I can't read the whole packet if one of my defined packets is too big. I have thought of creating a fixed-size header that gets read using .BeginRead() and then reading the rest using Stream.Read(), but this would lead to the whole server having to wait for that operation to complete. I would like to know if anyone has come across this before, and I would appreciate any suggestions.
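
    One common approach (sketched below, not from the original question) is length-prefixed framing done entirely with asynchronous reads: read a fixed-size header first, take the body length from it, then keep issuing BeginReceive calls for exactly that many bytes, so nothing ever blocks in Stream.Read(). The 4-byte little-endian prefix, the FramedReceiver name and the onPacket callback are all assumptions for illustration:

    using System;
    using System.Net.Sockets;

    // Hypothetical framing reader: 4-byte length prefix, then the body.
    // All reads are asynchronous, so the server never blocks waiting for a large packet.
    public class FramedReceiver
    {
        private readonly Socket _socket;
        private readonly Action<byte[]> _onPacket;   // called once per complete packet

        private byte[] _buffer = new byte[4];        // start by reading the header
        private int _received;
        private bool _readingHeader = true;

        public FramedReceiver(Socket socket, Action<byte[]> onPacket)
        {
            _socket = socket;
            _onPacket = onPacket;
            BeginRead();
        }

        private void BeginRead()
        {
            _socket.BeginReceive(_buffer, _received, _buffer.Length - _received,
                                 SocketFlags.None, OnReceive, null);
        }

        private void OnReceive(IAsyncResult ar)
        {
            int n = _socket.EndReceive(ar);
            if (n == 0) return;                      // connection closed

            _received += n;
            if (_received < _buffer.Length)
            {
                BeginRead();                         // keep filling the current frame
                return;
            }

            if (_readingHeader)
            {
                // Note: a real implementation should validate this length.
                int bodyLength = BitConverter.ToInt32(_buffer, 0);
                _buffer = new byte[bodyLength];      // now read exactly the body
                _readingHeader = false;
            }
            else
            {
                _onPacket(_buffer);                  // complete packet received
                _buffer = new byte[4];               // back to reading the next header
                _readingHeader = true;
            }

            _received = 0;
            BeginRead();
        }
    }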

    Read the article

  • Unused Indexes on MDP_MATRIX Consuming Resources

    - by user702295
    Disable unused indexes: as much as it is recommended to create relevant indexes, it is advised not to have too many indexes on the mdp_matrix table. Too many indexes will cause long waits on the table, as indexes need to be updated every time the table is updated. There are many seeded indexes on mdp_matrix; every out-of-the-box data model level has an index on the matrix table. If a level is unused in the specific data model of the implementation, it is advisable to disable that index. If the customer is not sure if and how indexes are utilized, the DBA can monitor all indexes. After a few cycles of operation, the DBA should go over that list, see which indexes have not been used, and consider disabling them. There are scripts on the net to monitor indexes, or you can use the monitoring usage clause of the alter index statement.

    Read the article

  • Service Layer - how broad should it be, and should it be used also on the local application?

    - by BornToCode
    Background: I need to build a main application with some operations (CRUD and more) in WinForms, and I need to make another application that will re-use some of the functions of the main application, in WebForms. I understood that using a service layer is the best approach here. If I understood correctly, the service should be calling the functions on the BL layer (correct me if I'm wrong). The dilemma:
    - In my main WinForms UI, should I call the functions from the BL or from the service? (Please explain why.)
    - Should I create a service for every single function in the BL, even if I need some of the functions only in one UI? For example, should I create services for all the CRUD operations, even though I only need to re-use the update operation in the WebForms app?
    Your help is much appreciated.
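
    For what it's worth, one common arrangement is sketched below (all names are hypothetical, not a prescribed design): the logic lives once in the business layer, the WinForms UI calls it directly because it runs in the same process, and the service layer only exposes the operations the remote WebForms UI actually needs.

    // Hypothetical layering sketch.

    // Business layer - the single place where the logic lives.
    public class CustomerLogic
    {
        public void Create(Customer c)  { /* ... */ }
        public Customer Read(int id)    { /* ... */ return null; }
        public void Update(Customer c)  { /* ... */ }
        public void Delete(int id)      { /* ... */ }
    }

    // Service layer - only what the remote front end really needs is exposed.
    public class CustomerService
    {
        private readonly CustomerLogic _logic = new CustomerLogic();

        // Only Update is re-used remotely, so only Update crosses the service boundary.
        public void UpdateCustomer(Customer c) => _logic.Update(c);
    }

    // WinForms UI: same process as the BL, so it calls CustomerLogic directly.
    // WebForms UI: a separate application, so it calls CustomerService instead.

    public class Customer { public int Id; public string Name; }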

    Read the article

  • What framework & technology would you use to make a site like fancy.com [on hold]

    - by adriancdperu
    I'm about to start a 3-month process to build something similar to fancy.com. What are your opinions on the best language + framework (server-side)? I was thinking about LAMP and CakePHP. What important technological issues should I consider when developing? MAU assumption: 1 million. Main features:
    - Registration with Facebook
    - Normal registration
    - Search articles
    - Like articles
    - Post articles
    - Comment on articles
    - Suggest articles to friends via mail, Facebook, Twitter and about 3 or 4 more APIs
    - Ranking of articles as a cron job run every minute, many criteria, many rankings
    - Follow users
    - Data-mining users to mail them every day with articles they have a high probability of liking
    - Operation tools for admins to add articles and close user accounts
    - Mobile focus

    Read the article

  • Should I use parentheses in logical statements even where not necessary?

    - by Jeff Bridgman
    Let's say I have a boolean condition a AND b OR c AND d and I'm using a language where AND has a higher order of operation precedent than OR. I could write this line of code: If (a AND b) OR (c AND d) Then ... But really, that's equivalent to: If a AND b OR c AND d Then ... Are there any arguments in for or against including the extraneous parentheses? Does practical experience suggest that it is worth including them for readability? Or is it a sign that a developer needs to really sit down and become confident in the basics of their language?

    Read the article

  • Unable to download Microsoft Excel files from an IIS SSL site

    - by Jeffrey
    The webmaster at my corporation added SSL to the web site, and now none of my users can download the Microsoft Word and Excel files the site generates. According to Microsoft, the following must be done: web sites that want to allow this type of operation should remove the no-cache header or headers. Typical of MS, they don't tell you what to do, how to do it, or what the best practice is. The webmaster says it's a web.config setting. But all I can find is:
    <configuration>
      <appSettings/>
      <connectionStrings/>
      <system.web>
        <httpRuntime sendCacheControlHeader="false"/>
    and I don't know if this is the best way to achieve the result. I would greatly appreciate some advice on this subject.

    Read the article

  • How Does The Maybe Monad Relate To The Option Type?

    - by Onorio Catenacci
    I was doing a presentation on F# and was discussing the Option type when someone in the audience asked me if the Option type is F#'s implementation of the maybe monad. I know that's not the case but I did want to ask how the two concepts are related. I mean it seems to me that an option type might be the result of the operation of a maybe monad but I'm not even sure of that. Would someone elucidate the relationship between the maybe monad and the option type in those functional languages which support it?
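
    One way to frame the relationship: the option type is just a data type (a value is either Some x or None), while the maybe monad is that same type together with a composition rule (return and bind) that threads the "no value" case through a chain of computations. A rough sketch in C# for illustration only – the Option, Some, None, Return and Bind names below are made up here, and F#'s real Option module is richer than this:

    using System;

    // Minimal option type: the data structure alone.
    public abstract class Option<T>
    {
        public sealed class Some : Option<T> { public readonly T Value; public Some(T v) { Value = v; } }
        public sealed class None : Option<T> { }
    }

    public static class OptionMonad
    {
        // 'return' (unit): wrap a plain value.
        public static Option<T> Return<T>(T value) => new Option<T>.Some(value);

        // 'bind': the rule that makes the type a maybe monad - None short-circuits.
        public static Option<U> Bind<T, U>(this Option<T> option, Func<T, Option<U>> f) =>
            option is Option<T>.Some some ? f(some.Value) : new Option<U>.None();
    }

    class Demo
    {
        // A computation that can fail.
        static Option<int> ParseInt(string s) =>
            int.TryParse(s, out var n) ? OptionMonad.Return(n) : new Option<int>.None();

        static void Main()
        {
            // Chaining with Bind: the "maybe" behaviour (skip the rest on None)
            // comes from the monad, not from the option type itself.
            Option<int> result = ParseInt("21").Bind(n => OptionMonad.Return(n * 2));
            Console.WriteLine(result is Option<int>.Some s ? s.Value.ToString() : "none");
        }
    }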

    Read the article
