Search Results

Search found 341 results on 14 pages for 'atomic'.

  • How to install a specific version of MySQL?

    - by user85569
    I installed MySQL 5.0.77 from the repo, including setup of PowerDNS (and its MySQL backend). I tried setting up replication from my master (which runs MySQL 5.1.53), but it didn't work: there were no errors, yet nothing got replicated. So the last resort is to run the same MySQL version on both the master and the slave (note: only the slave has pdns installed). How would I go about installing MySQL 5.1.53? I tried downloading an rpm from MySQL (obviously the wrong one, since it didn't even include the mysql command-line client to shell into the databases), and that in turn broke the dependencies for pdns' MySQL backend. I have the atomic repo, which would install MySQL 5.5 (on both my master and slave), but I don't want to do a major upgrade on the master right now as it's in production. Would love some advice!

  • InnoDB plugin disabled in MySQL

    - by alexcunn
    When I start up MySQL on my Ubuntu server I get the following output:

        121122 17:39:37 [Note] Plugin 'FEDERATED' is disabled.
        121122 17:39:37 InnoDB: The InnoDB memory heap is disabled
        121122 17:39:37 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121122 17:39:37 InnoDB: Compressed tables use zlib 1.2.3.4
        121122 17:39:37 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121122 17:39:37 InnoDB: Completed initialization of buffer pool
        121122 17:39:37 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121122 17:39:37 [ERROR] Plugin 'InnoDB' init function returned error.
        121122 17:39:37 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121122 17:39:37 [ERROR] Unknown/unsupported storage engine: InnoDB
        121122 17:39:37 [ERROR] Aborting
        121122 17:39:37 [Note] mysqld: Shutdown complete

    A few times I have gotten a message saying that the plugin is disabled. I use Webmin to configure it. Could that be a problem?
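
    The key line is the mmap() failure: errno 12 is ENOMEM, meaning the machine (often a small VPS) cannot give InnoDB the 128 MB it wants for its buffer pool. A minimal sketch of the usual workaround, assuming the config file lives at /etc/mysql/my.cnf and that 64M fits the free memory on the box (both are assumptions; adjust to your setup):

        [mysqld]
        # shrink the InnoDB buffer pool to what the host can actually provide
        innodb_buffer_pool_size = 64M

    Restart mysqld after the change; if the workload genuinely needs the larger pool, adding memory or swap is the alternative.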

  • XFS and loss of data when power goes down

    - by culebrón
    Each time the electricity goes down, my desktop (without a UPS) loses some recent data. Opera can lose settings, history, cache, or mail accounts, partially or all together (thank heavens I was wise enough to use IMAP); a whole file (completed and saved) in Geany turned up empty (and I hadn't committed it to Git); Rhythmbox lost all podcast subscription data. I'm afraid there are other losses I just didn't notice. What's the reason? An in-memory file cache, a mem-disk? Or non-atomic file writes in XFS? I have Ubuntu 9.10 with XFS on both the / and /home partitions. Is ext4 safer in such circumstances? I've seen that ext3 is faster; is it as safe as ext4? Given that the apartment I rent shares a single circuit breaker with several other apartments, and the neighbors (alone or together) overload it at least once a week, the lights go down often enough for this to be an issue.
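
    A likely culprit is the combination of XFS's delayed allocation with applications that rewrite files in place without forcing them to disk, so a crash leaves a zeroed or empty file. A hedged sketch of the crash-safe save pattern such applications should use (write a temp file, fsync it, atomically rename it over the original); the function and file names are illustrative:

        #include <cstdio>      // snprintf, rename
        #include <cstring>     // strlen
        #include <fcntl.h>     // open
        #include <unistd.h>    // write, fsync, close, unlink

        // After this returns true, `path` holds the new contents even if
        // the power fails right afterwards; on failure the old contents survive.
        bool safe_save(const char* path, const char* data) {
            char tmp[4096];
            std::snprintf(tmp, sizeof tmp, "%s.tmp", path);
            int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) return false;
            const size_t len = std::strlen(data);
            if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
                close(fd);
                unlink(tmp);                     // leave the original untouched
                return false;
            }
            close(fd);
            return rename(tmp, path) == 0;       // atomic replacement on POSIX
        }

    Applications that skip the fsync step get exactly the empty-file behavior described above; XFS was notorious for exposing it, while ext3's default data=ordered mode masked it and later ext4 kernels added rename heuristics to mitigate it.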

  • Why is a group policy not applied to the domain administrator account?

    - by Saariko
    I have a working policy on my entire domain. I just found out, when logging in with the domain administrator, that this policy is not applied. (EDIT: running gpresult shows that the GPOs are applied, but this GPO is for drive mappings, and the actual mapped drives do NOT show up.) The administrator account does not have any login script on its profile tab. My GPOs are mostly small/atomic settings, a single GPO to handle each setting: UAC, firewall, printers. GPO status for the object is enabled. Reading the MS support site, I checked the delegation tab, and it is marked as applied to domain and enterprise admins. Every other user gets these policies correctly. The OU that is set is the root of the domain (done for testing purposes, to eliminate hierarchy issues; it did not help). Block Inheritance is disabled (never used it anyway). (The original post included screenshots: an overview of the drive maps, the GPO link, and the GPO security filtering.)

  • PHP 5.2 installation with GD on CentOS 6

    - by Pratik Thakkar
    I am trying to install PHP 5.2.17 on CentOS 6.2. I have downloaded RPMs from http://www6.atomicorp.com/channels/atomic/centos/6/i386/RPMs/. The problem is that the PHP RPM seems to have GD disabled by default; hence, in spite of installing the php-gd RPM, GD stays disabled. Is there any way I can enable GD? Atomicorp seems to be the only website that has PHP 5.2.17 RPMs, and I am not an advanced enough user to compile PHP myself. I would appreciate help on this.

  • Repositories for CentOS that don't suck?

    - by Keyo
    I'm used to Ubuntu/Debian repositories, and they are great: I can apt-get just about any package and it'll be there. I have not found anything like this on CentOS. I called my hosting company and they suggested I install Atomic Turtle, since it's compatible with cPanel. That didn't work when I tried to install git:

        yum install git
        ...
        No package git available

    Repeat the same thing for just about any package; the default repositories are pathetic. So perhaps there are other repositories I can use. Can anyone suggest any?

    Edit: The problem was cPanel excluding some git dependencies in yum.conf. See http://www.cmdln.org/2010/05/07/install-git-on-centos-cpanel-server/
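
    For reference, the linked fix boils down to cPanel's yum.conf, whose exclude line blocks everything cPanel manages itself; git typically fails because its perl dependencies match those globs. A hedged sketch of what the relevant stanza tends to look like (the exact list varies by cPanel version, so treat this as illustrative):

        # /etc/yum.conf (excerpt)
        [main]
        exclude=courier* dovecot* exim* httpd* mod_ssl* mysql* perl* php* proftpd* pure-ftpd* ruby* spamassassin* squirrelmail*

    Removing perl* from that line, or running yum --disableexcludes=main install git, lets the dependencies resolve.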

  • Updating PHP on a Plesk managed Server

    - by mblaettermann
    I just updated PHP and MySQL on my VPS with the current versions from the Atomic repo. Everything has worked out fine so far; from the console I get the new PHP 5.3:

        [root@server phpMyAdmin]# php -v
        PHP 5.3.16 (cli) (built: Aug 20 2012 11:18:05)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
            with the ionCube PHP Loader v4.0.5, Copyright (c) 2002-2011, by ionCube Ltd.

    But through Apache I still get the old version (5.1.6). The server is running some old version of the crappy Plesk Panel, which gives me the option to choose between Apache module, FastCGI and CGI-BIN. Any hints on how to update Apache so it will use the new PHP version?

    EDIT: I just needed to restart httpd (/etc/init.d/httpd restart).

  • How to update OpenSSL using Putty and yum command

    - by JM4
    I am very new to updating server technologies, but we are trying to become PCI compliant and have to update some of our server software. One package in particular is OpenSSL: we are currently running 0.9.8e (arch i686), but we have to upgrade to AT LEAST 0.9.8g. When I run a yum update command, there are no updates available. If I run "yum info openssl", the only available package it lists is 0.9.8e (arch i386), and the only difference is a smaller file size. I am running the following repositories:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirrors.netdna.com
         * atomic: www6.atomicorp.com
         * base: mirrors.igsobe.com
         * extras: mirror.vcu.edu
         * updates: mirror.vcu.edu

    Any help out there?

  • Clang warning flags for Objective-C development

    - by Macmade
    As a C & Objective-C programmer, I'm a bit paranoid about compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use and turn most of them on, unless I have a really good reason not to. I personally think this may actually improve coding skills as well as potential code portability, and prevent some issues, as it forces you to be aware of every little detail and of potential implementation and architecture issues. It's also, in my opinion, a good everyday learning tool, even if you're an experienced programmer. For the subjective part of this question, I'm interested in hearing from other developers (mainly C, Objective-C and C++) on this topic: do you actually care about stuff like pedantic warnings, and whether yes or no, why? Now, about Objective-C: I recently switched completely to the LLVM toolchain (with Clang) instead of GCC. On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall):

        -Wall -Wbad-function-cast -Wcast-align -Wconversion
        -Wdeclaration-after-statement -Wdeprecated-implementations -Wextra
        -Wfloat-equal -Wformat=2 -Wformat-nonliteral -Wfour-char-constants
        -Wimplicit-atomic-properties -Wmissing-braces -Wmissing-declarations
        -Wmissing-field-initializers -Wmissing-format-attribute
        -Wmissing-noreturn -Wmissing-prototypes -Wnested-externs -Wnewline-eof
        -Wold-style-definition -Woverlength-strings -Wparentheses
        -Wpointer-arith -Wredundant-decls -Wreturn-type -Wsequence-point
        -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wsign-conversion
        -Wstrict-prototypes -Wstrict-selector-match -Wswitch -Wswitch-default
        -Wswitch-enum -Wundeclared-selector -Wuninitialized -Wunknown-pragmas
        -Wunreachable-code -Wunused-function -Wunused-label -Wunused-parameter
        -Wunused-value -Wunused-variable -Wwrite-strings

    I'm interested in hearing what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why?

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
    We just released SOA Suite 11gR1 Patch Set 2 (PS2) this morning! You can download it as usual from OTN (main platforms only) or eDelivery (all platforms). 11gR1 PS2 is delivered as a sparse installer, that is to say it is meant to be applied on top of the latest full release (11gR1 PS1). The good part is that this is great for existing PS1 users, who simply need to apply the patch and run the patch assistant; the not-so-good part is that new users will first need to download PS1. What's in this release? Bug fixes, of course, but also several significant new features. Here is a short selection of the most significant features in PS2:

    - Spring component (for native Java extensibility and integration)
    - SOA partitions (to organize and manage your composites)
    - Direct binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)

    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time, all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content, so if you don't find what you need today, try again soon!)

  • Dependency problems when installing mysql-server-5.5

    - by Furtano
        qcons@014-QCONS:/var/lib$ sudo apt-get install -f mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        mysql-server is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        2 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ...
        121112 11:16:52 [Note] Plugin 'FEDERATED' is disabled.
        121112 11:16:52 InnoDB: The InnoDB memory heap is disabled
        121112 11:16:52 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121112 11:16:52 InnoDB: Compressed tables use zlib 1.2.3.4
        121112 11:16:52 InnoDB: Initializing buffer pool, size = 128.0M
        121112 11:16:52 InnoDB: Completed initialization of buffer pool
        121112 11:16:52 InnoDB: highest supported file format is Barracuda.
        121112 11:16:53 InnoDB: Waiting for the background threads to start
        121112 11:16:54 InnoDB: 1.1.8 started; log sequence number 1595675
        121112 11:16:54 InnoDB: Starting shutdown...
        121112 11:16:54 InnoDB: Shutdown completed; log sequence number 1595675
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not yet configured.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it is a
        follow-up error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.5
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

  • mysql service does not launch

    - by ted
    Sorry for my English. I was trying to create the db with rake in a RoR application that had been configured for MySQL (gem installed, settings changed). After that attempt, mysql-server broke:

        d@calister:~$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    mysqld is not running at all:

        d@calister:~$ ps aux | grep mysql
        d         3769  0.0  0.0   4368   832 pts/0   S+   18:03   0:00 grep --color=auto mysql

    And it doesn't seem to want to run, either:

        d@calister:~$ sudo service mysql start
        start: Job failed to start

    Any suggestions? Thanks

    EDIT:

        d@calister:~$ sudo -u mysql mysqld
        120520 18:45:11 [Note] Plugin 'FEDERATED' is disabled.
        mysqld: Table 'mysql.plugin' doesn't exist
        120520 18:45:11 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        120520 18:45:11 InnoDB: The InnoDB memory heap is disabled
        120520 18:45:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        120520 18:45:11 InnoDB: Compressed tables use zlib 1.2.3.4
        120520 18:45:11 InnoDB: Initializing buffer pool, size = 128.0M
        120520 18:45:11 InnoDB: Completed initialization of buffer pool
        120520 18:45:11 InnoDB: highest supported file format is Barracuda.
        120520 18:45:12 InnoDB: Waiting for the background threads to start
        120520 18:45:13 InnoDB: 1.1.8 started; log sequence number 1589459
        120520 18:45:13 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist

  • Is LINQ to objects a collection of combinators?

    - by Jimmy Hoffa
    I was just trying to explain the usefulness of combinators to a colleague, and I told him the LINQ to Objects operators are like combinators, as they exhibit the same value: the ability to combine small pieces to create a single large piece. Though I don't know that I can actually call LINQ to Objects combinators. I've seen two levels of definition for "combinator", which I generalize as follows: (1) a combinator is a function which only uses things passed to it; (2) a combinator is a function which only uses things passed to it plus other standard atomic functions, but no state. The first is very rigid and can be seen in the combinatory calculus systems; in Haskell, things like $ and . and various similar functions meet this rule. The second is less rigid and would allow something like sum, as it uses the + function, which was not passed in but is standard and not stateful. The LINQ extensions in C#, though, use state in their iteration models, so I feel I can't say they're combinators. Can someone who understands the definition of a combinator more thoroughly, and with more experience in these realms, give a distinct ruling on this? Are my definitions of "combinator" wrong to begin with?
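
    To make the two definitions concrete, here is a hedged C++14 illustration (chosen purely for neutrality; the same shapes exist in C# and Haskell). compose satisfies the strict definition because it touches nothing but its arguments, while sum only satisfies the looser one, since it leans on the standard, stateless + operation via std::accumulate:

        #include <numeric>
        #include <vector>

        // Definition 1: uses only what was passed in (like Haskell's "." or "$").
        auto compose = [](auto f, auto g) {
            return [=](auto x) { return f(g(x)); };
        };

        // Definition 2: also uses a standard atomic function (operator+),
        // but still no state.
        int sum(const std::vector<int>& xs) {
            return std::accumulate(xs.begin(), xs.end(), 0);
        }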

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line. Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions. Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing: it prints a file. The second version will lead to a large number of really small functions, which may be far less legible than the first. Wouldn't it be better, in this case, to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

  • Understanding Service Compensation part of Industrial SOA series

    - by JuergenKress
    Some of the most important SOA design patterns that we have successfully applied in projects are described in this article, among them the Compensation pattern, the UI Mediator pattern, the Common Data Format pattern and the Data Access pattern. All of these patterns are included in Thomas Erl's book "SOA Design Patterns" and are presented here in detail, together with our practical experiences. We begin our "best of" SOA pattern collection with the Compensation pattern. Compensation is required in error situations in an SOA because multiple atomic service operations cannot generally be linked with classic transactions; doing so would violate the principle of loose coupling. An error situation of this sort will occur particularly if service operations are combined into processes or new services during orchestration, or by applying the Composite pattern, and the transaction bracket has to be expanded as a result. We need mechanisms to undo the effects of individual services (the status changes in the overall system) and to ensure that a consistent system state is maintained at all times, so as to preserve system integrity. For the Compensation pattern, we would like to address the following questions: Why is compensation important in relation to SOA? How is the topic of compensation linked with the topic of transactions? What are the challenges with regard to compensation... Read the full article in the Service Technology Magazine or at OTN. Share your comments and feedback on the Industrial SOA series by using the hashtag #industrialsoa. Missed an article of the Industrial SOA series? Visit the overview at OTN. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

  • Which style of member access is preferable?

    - by itwasntpete
    The purpose of OOP classes is to encapsulate members from the outside world, and I always read that members should be accessed through methods. For example:

        template<typename T>
        class foo_1 {
            T state_;
        public:
            // accessors follow below
        };

    The most common style taught by my professor was to have a get and a set method:

        // variant 1
        T const& getState() { return state_; }
        void setState(T const& v) { state_ = v; }

    or like this:

        // variant 2 (in my opinion easier to read)
        T const& state() { return state_; }
        void state(T const& v) { state_ = v; }

    Assume state_ is a variable which is checked periodically and there is no need to ensure the value (state) stays consistent. Is there any disadvantage to exposing the state by reference? For example:

        // variant 3: hand out a mutable reference
        T& state() { return state_; }

    Or even directly, if I declare the variable public:

        template<typename T>
        class foo {
        public:
            // variant 4
            T state;
        };

    In variant 4 I could even ensure consistency by using C++11 atomics. So my question is: which one should I prefer? Is there any coding standard that would reject one of these patterns? For some code, see here.
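
    As a footnote to variant 4, a minimal sketch of what a public member "made consistent" with C++11 atomics might look like (assuming T is trivially copyable, which std::atomic requires):

        #include <atomic>

        template<typename T>
        class foo {
        public:
            // variant 4 with atomics: each individual load and store of
            // `state` is race-free, but separate accesses still do not
            // compose into a larger transaction.
            std::atomic<T> state;
        };

    Note this only protects single reads and writes; a read-modify-write such as f.state = f.state + 1 is still two separate atomic operations unless you use fetch_add or a compare-exchange loop.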

  • XPath & XML EDI B2B

    - by PearlFactory
    GoodToGo :) The best XML editor is Altova XMLSpy 2011: http://www.torrenthound.com/hash/bfdbf55baa4ca6f8e93464c9a42cbd66450bb950/torrent-info/Altova-XMLSpy-Enterprise-Edition-SP1-2011-v13-0-1-0-h33t-com-Full (for whatever reason Piratebay has trojans and other nasties; search on torrent.eu for Altova XMLSpy Enterprise Edition SP1 2011 v13.0.1.0). Also, if you like the product and use it in a commercial environment, purchase it. Any well-structured/complex XML can be parsed at the speed of light using XPath queries instead of C# objects like XPathNodeIterator and the rest; never write loops or generics or whatever high-level language technology when the parser can do the work. I'll use a simple do-while as an example (we could have many different techniques all achieving the same result). Instead of:

        xmlNI2 = xmlNav.Select("/p:BookShop");
        if (xmlNI2.Count != 0) {
            while (xmlNI2.MoveNext()) {
                string aNode = xmlNI2.Current.SelectSingleNode("Book", nsmgr).Value;
                if (aNode == "The Book I am after")
                    Console.WriteLine("Found My Book");
            }
        }

    this lengthy, cumbersome task can be achieved with a single XPath query:

        Console.WriteLine(xmlNav.SelectSingleNode("/p:BookShop/Book[.='The Book I am after']", nsmgr).Value.ToString());

    Use the power of the parser and eliminate the middlemen (C#/MSIL/JIT and so on). To get started fast with the parser as outlined: 1) open the XML and go to Grid mode; 2) select the XPath tab on the bottom viewer/window. From there you get IntelliSense and can quickly learn how to navigate and find data using XPath. A key component of navigation with XPath is the "../" step, which basically says: from where I am now, go up one level. With XPath all steps are cumulative, i.e. you can search for a book title at the 2nd level of the XML and from there traverse 15 layers down to paragraphs or words on a page, with predicate validation occurring throughout this process (so in essence you may have arrived at a node in the XML having met 15 conditions along the way). Given 1-2 days with XMLSpy and XPath, you unlock a technology that is super fast and simple to use. XML is a core component of what lies under the hood of so many technologies, so it is no wonder you want to be able to go to the atomic level to get the result you want. Justin. P.S. For a long time I saw XML as slow and a bit boring, but now I'm converted.

  • How do functional languages handle a mocking situation when using Interface based design?

    - by Programmin Tool
    Typically in C# I use dependency injection to help with mocking:

        public class UserService
        {
            public UserService(IUserQuery userQuery,
                               IUserCommunicator userCommunicator,
                               IUserValidator userValidator)
            {
                UserQuery = userQuery;
                UserValidator = userValidator;
                UserCommunicator = userCommunicator;
            }

            ...

            public UserResponseModel UpdateAUserName(int userId, string userName)
            {
                var result = UserValidator.ValidateUserName(userName);
                if (result.Success)
                {
                    var user = UserQuery.GetUserById(userId);
                    if (user == null)
                        throw new ArgumentException();
                    user.UserName = userName;
                    UserCommunicator.UpdateUser(user);
                }
                ...
            }

            ...
        }

        public class WhenGettingAUser
        {
            public void AndTheUserDoesNotExistThrowAnException()
            {
                var userQuery = Substitute.For<IUserQuery>();
                userQuery.GetUserById(Arg.Any<int>()).Returns((User)null);
                var userService = new UserService(userQuery,
                                                  Substitute.For<IUserCommunicator>(),
                                                  Substitute.For<IUserValidator>());
                AssertionExtensions.ShouldThrow<ArgumentException>(
                    () => userService.GetUserById(-121));
            }
        }

    Now, in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the above, which normally touch the persistence layer, without using interfaces/mocks? I realize that every step above would be tested on its own and kept as atomic as possible. The problem is that at some point they all have to be called in line, and I'll want to make sure everything is called correctly.

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API, as well as the core ASP.NET runtime, to choose from to build HTTP content with. Web API squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

    What's a Web API?

    HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices, like phones and tablets, run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the Cloud. Likewise, many browser-hosted applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server-generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests, and an easy-to-use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

    But .NET already has various HTTP Platforms

    The .NET stack already includes a number of technologies for creating HTTP service back ends, and has since the very beginning of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs/services using any of the existing out-of-box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
    Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to make something more complex, like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise, you can use ASP.NET MVC to handle routing and create content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible with some plumbing code of your own, but not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology with a severe personality conflict, never being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as did a few other third party tools that provided much better integration and ease of use. With the release of Web API, for the first time I feel I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own, and much more.

    ASP.NET Web API provides a better HTTP Experience

    ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:

    - Strong support for URL routing to produce clean URLs using familiar MVC style routing semantics
    - Content negotiation based on Accept headers for request and response serialization
    - Support for a host of output formats including JSON, XML, ATOM
    - Strong default support for REST semantics, but they are optional
    - Easily extensible formatter support to add new input/output types
    - Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed enums to describe many HTTP operations
    - Convention based design that drives you into doing the right thing for HTTP services
    - Very extensible, based on an MVC-like extensibility model of formatters and filters
    - Self-hostable in non-Web applications
    - Testable using testing concepts similar to MVC

    Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above you can see that a lot of the functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
    The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC but with a focus on HTTP access and manipulation in controller methods rather than MVC's HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important: today's service backends are often used by multiple clients/applications, and being able to choose the data format that fits each client best matters a great deal. While previous solutions could accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server-side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

    No Brainers for Web API

    There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even part of an application, is some sort of API, then Web API makes great sense.

    HTTP Services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion about what should go where (MVC or API). Perfect fit.

    Primary AJAX Backends: If you're building a rich client Web application that relies heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPAs), typically very little HTML-based logic is served other than bringing up a shell UI, with the data then filled in from the server via AJAX, which means the business logic required for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

    Generic HTTP Endpoints: Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If I needed to implement an image server or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well-defined place to implement these types of generic 'services', a location where you can easily add endpoints (via controller methods) or separate them out as more full-featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and better defined place to keep generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

    Mixed HTML and AJAX Applications: Not a Clear Choice

    For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs. Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML-related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where the 'trigger point' is for switching to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice; but if you only have a couple of small AJAX requests serving data to a grid or autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there doesn't mean you have to use it. Web API is not a full replacement for MVC either, obviously, since there's not the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API, or of your application in general, MVC is still the better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs, but at the moment that's not the case.

    Some Issues to Think About

    Web API is similar to MVC, but not the same: Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

    Code duplication: I already touched on this in the Mixed HTML and AJAX section, but if you build an MVC application that also exposes a Web API, it's quite likely you'll end up duplicating a bunch of code and, potentially, infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want the same functionality in your Web API, you'll end up creating the filter twice.

    AJAX data or AJAX HTML: In the comments on a recent post, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: "I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)" You know what - I can totally relate to that.
    On the last Web-based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire this way, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up by the lack of client-side HTML rendering. Server-rendered HTML for AJAX templating gives so much better infrastructure support, without having to screw around with 20 mismatched client libraries. Especially with MVC and partials, it's pretty easy to break your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort and provided surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API, of course, but something to think about, especially for AJAX or SPA-style applications.

    Summary

    Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of the many half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now; there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

    Resources: ASP.NET Web API; AspConf Ask the Experts Session (first 5 minutes)

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

  • Fine-grained permissions on SVN Repository

    - by Jim
    Hello, I'm trying to set up an SVN repo for a whole bunch of users. Different users need to have different levels of access to areas of the repository. A trivial example might be that frontend engineers need access to "view" and "controllers" but not "model", while backend engineers need access to "controllers" and "model" but not "view". It needs to be one repository, because (as far as I know) that's the only way to ensure that commits touching multiple modules are atomic. Is there a fine-grained way to control user access to a repository? Thanks!
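
    For the record, Subversion supports exactly this through path-based authorization. A hedged sketch, assuming access goes through Apache or svnserve pointed at an access file (via AuthzSVNAccessFile or authz-db respectively); the repository, group and path names are illustrative:

        [groups]
        frontend = alice,bob
        backend = carol,dave

        [repo:/trunk/view]
        @frontend = rw

        [repo:/trunk/controllers]
        @frontend = rw
        @backend = rw

        [repo:/trunk/model]
        @backend = rw

    Commits touching multiple modules stay atomic because it is still a single repository; the authz file only gates who may read or write each path.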

  • replace file with hardlink to another file atomically

    - by Ben Clifford
    I have two directory entries, a and b. Before, a and b point to different inodes. Afterwards, I want b to point to the same inode as a does. I want this to be safe, by which I mean that if I fail somewhere, b either points to its original inode or to the a inode; most especially, I don't want to end up with b disappearing. mv is atomic when overwriting; ln appears not to work when the destination already exists. So it looks like I can say:

        ln a tmp
        mv tmp b

    which in case of failure will leave a 'tmp' file around - undesirable, but not a disaster. Is there a better way to do this? (What I'm actually trying to do is replace files that have identical content with a single inode containing that content, shared between all directory entries.)
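
    The shell recipe above is essentially the right answer; expressed through the syscall interface, the same two steps make the failure window explicit. A hedged C++ sketch (the function and temp-file names are illustrative), relying on the POSIX guarantee that rename() atomically replaces the destination:

        #include <cstdio>      // rename, perror
        #include <unistd.h>    // link, unlink

        // Make directory entry `b` refer to the same inode as `a`.
        // On failure, b still refers to its original inode.
        int replace_with_hardlink(const char* a, const char* b) {
            const char* tmp = "b.link.tmp";    // must be on the same filesystem as b
            if (link(a, tmp) != 0) {           // create a second name for a's inode
                perror("link");
                return -1;
            }
            if (rename(tmp, b) != 0) {         // atomically swing b to the new inode
                perror("rename");
                unlink(tmp);                   // clean up; b was never touched
                return -1;
            }
            return 0;
        }

    Generating a unique temporary name (for example from the PID) avoids collisions when several of these run concurrently, which is the same caveat the shell version carries.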

  • Ruby on Rails protect_from_forgery best practice

    - by randombits
    I'm currently working on building a RESTful web API with Ruby on Rails. I haven't bothered putting a proper authentication scheme into the API yet, as I'm first making sure that the tests and the basic behavior of the API all work locally. Upon testing non-GET requests such as HTTP POST/DELETE/PUT, things choke because protect_from_forgery is on by default. How does this work in practice, given that the idea in a RESTful API is that there is no state? The client has no session or cookie associated with the server; each request is an atomic, self-contained operation. The user will supply some credentials to ensure they are who they say they are, but other than that, does protect_from_forgery make sense at this point? Should it remain enabled?

  • Lock Free Queue -- Single Producer, Multiple Consumers

    - by Shirish
    Hello, I am looking for a method to implement a lock-free queue data structure that supports a single producer and multiple consumers. I have looked at the classic method by Maged Michael and Michael Scott (1996), but their version uses linked lists. I would like an implementation that makes use of a bounded circular buffer; something that uses atomic variables? On a side note, I am not sure why these classic methods are designed around linked lists, which require a lot of dynamic memory management. In a multi-threaded program, all memory management routines are serialized; aren't we defeating the benefits of lock-free methods by using them in conjunction with dynamic data structures? I am trying to code this in C/C++ using the pthread library on an Intel 64-bit architecture. Thank you, Shirish
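
    One well-known design that fits these constraints is Dmitry Vyukov's bounded MPMC queue, which also covers the single-producer case: each slot carries a sequence number, so nothing is allocated after construction. A hedged C++11 sketch (using std::atomic rather than pthreads, and simplified to omit padding against false sharing; capacity must be a power of two and T default-constructible):

        #include <atomic>
        #include <cassert>
        #include <cstddef>
        #include <utility>
        #include <vector>

        template <typename T>
        class SpmcQueue {
            struct Slot {
                std::atomic<size_t> seq;
                T value;
            };
            std::vector<Slot> slots_;
            const size_t mask_;
            std::atomic<size_t> head_;   // next index consumers will claim
            size_t tail_;                // only the producer touches this

        public:
            explicit SpmcQueue(size_t capacity)
                : slots_(capacity), mask_(capacity - 1), head_(0), tail_(0) {
                assert(capacity >= 2 && (capacity & (capacity - 1)) == 0);
                for (size_t i = 0; i < capacity; ++i)
                    slots_[i].seq.store(i, std::memory_order_relaxed);
            }

            bool try_push(T v) {                    // single producer only
                Slot& s = slots_[tail_ & mask_];
                if (s.seq.load(std::memory_order_acquire) != tail_)
                    return false;                   // queue is full
                s.value = std::move(v);
                s.seq.store(tail_ + 1, std::memory_order_release);  // publish
                ++tail_;
                return true;
            }

            bool try_pop(T& out) {                  // safe from many consumers
                size_t h = head_.load(std::memory_order_relaxed);
                for (;;) {
                    Slot& s = slots_[h & mask_];
                    size_t seq = s.seq.load(std::memory_order_acquire);
                    if (seq == h + 1) {             // slot filled; try to claim it
                        if (head_.compare_exchange_weak(
                                h, h + 1, std::memory_order_relaxed)) {
                            out = std::move(s.value);
                            // free the slot for the producer's next lap
                            s.seq.store(h + slots_.size(),
                                        std::memory_order_release);
                            return true;
                        }                           // lost the race; h was reloaded
                    } else if (seq == h) {
                        return false;               // queue is empty
                    } else {
                        // another consumer got ahead; pick up the new head
                        h = head_.load(std::memory_order_relaxed);
                    }
                }
            }
        };

    The sequence numbers answer the side note directly: because slots are recycled in place, the queue never touches the (serialized) allocator after construction, which is exactly what the linked-list designs give up.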

  • (Google AppEngine) Memcache Lock Entry

    - by Friedrich
    Hi, I need locking in memcache. Since all memcache operations are atomic, that should be an easy task. My idea is to use a basic spin-lock mechanism, so every object that needs locking in memcache gets a lock object, which will be polled for access:

        // pseudo code: try to get the lock
        int lock;
        do {
            lock = Memcache.increment("lock", 1);
        } while (lock != 1);

        // ok, we got the lock
        // do something here

        // and finally unlock
        Memcache.put("lock", 0);

    How does such a solution perform? Do you have a better idea of how to lock a memcache object? Best regards, Friedrich Schick

  • Terminate a long-running thread in a thread pool created using QueueUserWorkItem (Win32/NT5)

    - by Jake
    I am programming in a Win32 NT5 environment. I have a function that is going to be called many times; each call is atomic. I would like to use QueueUserWorkItem to take advantage of multicore processors. The problem I am having is that I only want to give the function 3 seconds to complete; if it has not completed in 3 seconds, I want to terminate the thread. Currently I am doing something like this:

        HANDLE newThreadFuncCall = CreateThread(NULL, 0, funcCall, &func_params, 0, NULL);
        DWORD result = WaitForSingleObject(newThreadFuncCall, 3000);
        if (result == WAIT_TIMEOUT) {
            TerminateThread(newThreadFuncCall, WAIT_TIMEOUT);
        }

    I just spawn a single thread and wait 3 seconds for it to complete. Is there any way to do something similar but using QueueUserWorkItem to queue up the work? Thanks!
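
    Since TerminateThread on a pool thread is unsafe (the pool recycles its threads), the usual pattern with QueueUserWorkItem is cooperative cancellation: wait on an event with a 3-second timeout and flag the worker to stop. A hedged sketch; WorkContext, Worker, the loop body and the timings are illustrative stand-ins for funcCall and func_params, not code from the original post:

        #include <windows.h>
        #include <stdio.h>

        struct WorkContext {
            HANDLE done;              // signaled by the worker when it finishes
            volatile LONG cancelled;  // set by the caller on timeout
            volatile LONG refs;       // two owners: the caller and the worker
            int result;
        };

        static void ReleaseContext(WorkContext* ctx) {
            if (InterlockedDecrement(&ctx->refs) == 0) {
                CloseHandle(ctx->done);
                delete ctx;           // last owner frees the shared state
            }
        }

        static DWORD WINAPI Worker(LPVOID p) {
            WorkContext* ctx = static_cast<WorkContext*>(p);
            int result = 0;
            for (int i = 0; i < 1000; ++i) {  // stands in for funcCall's work
                if (ctx->cancelled) break;    // poll the flag at safe points
                Sleep(10);                    // simulate one small unit of work
                result = i;
            }
            ctx->result = result;
            SetEvent(ctx->done);              // tell the caller we are finished
            ReleaseContext(ctx);
            return 0;
        }

        int main() {
            WorkContext* ctx = new WorkContext;
            ctx->done = CreateEvent(NULL, TRUE, FALSE, NULL);  // manual-reset
            ctx->cancelled = 0;
            ctx->refs = 2;
            QueueUserWorkItem(Worker, ctx, WT_EXECUTEDEFAULT);
            if (WaitForSingleObject(ctx->done, 3000) == WAIT_TIMEOUT) {
                InterlockedExchange(&ctx->cancelled, 1);  // ask the worker to stop
                printf("timed out; worker asked to abandon its work\n");
            } else {
                printf("worker finished, result = %d\n", ctx->result);
            }
            ReleaseContext(ctx);
            return 0;
        }

    The refcount matters because the caller may give up and return while the pool thread is still running; whoever releases last frees the context, so neither side ever touches freed memory.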
