Search Results

Search found 20336 results on 814 pages for 'connection strings'.

  • What language available on commodity web hosts would suit a C# developer? [closed]

    - by billpg
    Recognising its ubiquity on commodity web hosting services, I tried developing in PHP a few years ago. I really didn't like it, later deciding that life was too short for PHP. (In brief: having to put $ on variable names; mis-spelt variable names becoming new variables; non-numeric strings converting to integers without complaint; the need for an "and this time I mean it" comparison operator.) In my ideal world, commodity web hosts would all support C#/ASP.NET, my preferred web-development language and framework, but this is not my ideal world. Even Mono has barely made a dent on Linux-based hosts. However, last time I moaned about PHP's ubiquity, someone followed up that this was no longer the case, and that many other languages are now commonly usable on web hosts too. So, what programming language (a) would suit a developer who prefers C#, and (b) is available to run on many web hosts?

    Read the article

  • 25 years old and considering a career change...possible? practical?

    - by mq330
    Hi all, I'm new to this site and new to programming as well. I've spent some time going through an intro CS book that uses Python as the language of choice. I find the exercises interesting and engaging, and I generally have had a favorable experience programming so far. I've gone through some of the basics with Python, like writing simple programs, basics of GUIs, manipulating strings, lists, defining functions, etc. And I've always loved technology. Although I've never done any real hardcore programming yet, I was drawn to building websites from a very young age but I never really developed my skills. Now, the thing is I'm 25, I have my bachelor's in environmental studies and two master's degrees in urban planning and landscape architecture respectively. I know, it would be quite a departure to pursue a career in programming at this point. Currently, I'm working as a geographic information systems intern. I've taken some GIS classes and have a lot of experience with making maps, doing spatial analysis, etc. So what I'm thinking is maybe I can learn some solid programming skills and apply these skills in the field of GIS. From what I've seen, .NET languages are the norm in this arena. Could you perhaps provide some guidance to me in terms of what languages I should focus on or courses I should take at this point? What about for building web mapping applications? Also, I was thinking about getting a certificate in programming from a university extension program. Do you think it would be worth it? And furthermore, do you think potential employers would be interested in hiring someone like me (once I get a couple of languages down pretty well) as an intern or in an entry-level position? I'll be living in the Bay Area, so I feel that there should be decent opportunities even though I don't have a B.S. in CS.

    Read the article

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I released an updated version of QueryUnit, version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions and is better at handling string results. I must say that I can no longer live without proper unit testing of a BI solution. Just yesterday, one of the unit tests at a customer site failed, exposing a subtle situation where the release of a new version of a custom application would have corrupted the source of the BI data, with a very low chance that anyone would have noticed it for several days. That can happen when you have more than 15 systems that handle the data needed by your BI solution. The key message of this situation is "Trust No One": if your data hasn't passed quality testing, it's not trustworthy. Period. QueryUnit is now officially a hero :) No superpowers yet, but useful above all. http://queryunit.codeplex.com/
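
    The broader point generalizes beyond QueryUnit itself: a BI data-quality test is just a query plus an assertion. Below is a minimal sketch of that pattern using plain NUnit and ADO.NET rather than QueryUnit's own API; the connection string, table, and column names are illustrative assumptions, not part of the original post.

      using System.Data.SqlClient;
      using NUnit.Framework;

      [TestFixture]
      public class BiDataQualityTests
      {
          // Hypothetical warehouse connection; substitute your own.
          private const string ConnectionString =
              "Server=.;Database=DataWarehouse;Integrated Security=true";

          // Runs a single-value query and returns the result.
          private static object RunScalar(string sql)
          {
              using (var conn = new SqlConnection(ConnectionString))
              using (var cmd = new SqlCommand(sql, conn))
              {
                  conn.Open();
                  return cmd.ExecuteScalar();
              }
          }

          [Test]
          public void FactTable_HasRowsForYesterday()
          {
              // If an upstream system silently stops feeding data, this fails immediately
              // instead of someone noticing the gap days later.
              var count = (int)RunScalar(
                  "SELECT COUNT(*) FROM FactSales WHERE LoadDate = CAST(GETDATE() - 1 AS date)");
              Assert.Greater(count, 0, "No rows loaded for yesterday - check the upstream feeds.");
          }

          [Test]
          public void Revenue_IsNeverNegative()
          {
              // Assumes Revenue is a decimal column.
              var minRevenue = (decimal)RunScalar("SELECT MIN(Revenue) FROM FactSales");
              Assert.GreaterOrEqual(minRevenue, 0m, "Negative revenue suggests corrupted source data.");
          }
      }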

    Read the article

  • Stylecop 4.7.39.0 has been released

    - by TATWORTH
    StyleCop 4.7.39.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:
      - Allow case sensitivity in the deprecated words and recognised words list.
      - Styling fixes.
      - Fix for documentation spelling checks inside nested xml nodes.
      - Look for CustomDictionary.xml files in the folder of the cs file.
      - Update the TabIndex in the spelling tab.
      - Updating default deprecated words and their alternatives.
      - Add support for specifying dictionary folders in the settings.StyleCop file.
      - Rename StyleCopViolationError to StyleCopHighlightingError and all associated types.
      - Fix the Bulb Item for spelling mistakes to replace matching words correctly.
      - Fix the spelling parser for strings beginning with $$.
      - THREADING FIX: Make StyleCop execute analysis in-process and not create 2 threads. Use CountdownEvent when we move to .NET 4.
      - Use the naming service for the Culture specified for the project.
      - Pass the actual violation through to ReSharper.
      - Ensure Registry access code works for VS2008 addins.
      - Rollback Registry changes to ensure VS2008 plugin loads correctly.
      - Adding support for preferred alternative words for spelling. Adding deprecated word support into the Settings.StyleCop file. Spelling is only checked if Office 2010 is installed. Allow editing of deprecated words and their alternatives in the Settings editor.
      - Adding new resource strings.
      - Adding BulbItem and Quick fixes for spelling errors.
      - Moving StringExtensions to common area.
      - Styling fixes.
      - Report all spelling errors found on a line.
      - Start of 4.7.39.0 dev.

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating those spambots later, but right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps by my inability to come up with the right search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site." How do forum spambots work? How do they find the 'new user registration' page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or even other methods invisible to the URL bar.) How do they know what to enter into each 'new user registration' field? How do they determine what's a page they can spam / enter data into and what is not? Do they even 'view' this page at all? If not, then I'd assume they're communicating with the server directly - how is this possible? How do they do it? Can forum spambots break CAPTCHAs? Can they solve logic questions (how?)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid to prevent them? Where do spambots come from? Is someone sitting behind the computer snickering as they watch their bot destroy site after site? Or are they snickering as they simply 'release' it onto the internet somehow? Are spambots 'run' by an infected computer somewhere? Do they replicate themselves? etc.
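
    To make the "communicating with the server directly" part concrete: a registration bot typically never renders the page at all; it simply replays the HTTP POST a browser would send. The sketch below (C#, with a hypothetical forum URL and field names) shows how little is involved, which is why purely client-side defences don't stop bots and why checks have to happen on the server.

      using System;
      using System.Collections.Generic;
      using System.Net.Http;
      using System.Threading.Tasks;

      class DirectFormPost
      {
          static async Task Main()
          {
              // The URL and field names here are hypothetical; a real bot scrapes them
              // from the registration form's HTML or uses a prepared template.
              var fields = new Dictionary<string, string>
              {
                  ["username"] = "newuser123",
                  ["email"]    = "user@example.com",
                  ["password"] = "hunter2"
              };

              using (var client = new HttpClient())
              {
                  var response = await client.PostAsync(
                      "http://forum.example.com/register.php",
                      new FormUrlEncodedContent(fields));

                  // The page is never "viewed"; only the response status matters.
                  // This is why CAPTCHAs, e-mail confirmation and honeypot fields
                  // must be validated server-side before the account is created.
                  Console.WriteLine((int)response.StatusCode);
              }
          }
      }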

    Read the article

  • cannot install firmware-b43-installer

    - by unknown
    The output I get when installing: installArchives() failed: Preconfiguring packages ... Preconfiguring packages ... Preconfiguring packages ... Selecting previously deselected package menu. (Reading database ... 216125 files and directories currently installed.) Unpacking menu (from .../menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package wifi-radar. Unpacking wifi-radar (from .../wifi-radar_2.0.s05-1.2_all.deb) ... Processing triggers for man-db ... Processing triggers for install-info ... Processing triggers for doc-base ... Processing 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for python-gmenu ... Rebuilding /usr/share/applications/desktop.en_US.UTF8.cache... Processing triggers for python-support ... Setting up firmware-b43-installer (4.150.10.5-5) ... --2012-10-26 08:51:30-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2 Resolving mirror2.openwrt.org... 46.4.11.11 Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused. dpkg: error processing firmware-b43-installer (--configure): subprocess installed post-installation script returned error exit status 4 No apport report written because MaxReports is reached already Setting up menu (2.1.44ubuntu1) ... Processing triggers for menu ... Setting up wifi-radar (2.0.s05-1.2) ... Processing triggers for menu ... Errors were encountered while processing: firmware-b43-installer Setting up firmware-b43-installer (4.150.10.5-5) ... --2012-10-26 08:51:33-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2 Resolving mirror2.openwrt.org... 46.4.11.11 Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused. dpkg: error processing firmware-b43-installer (--configure): subprocess installed post-installation script returned error exit status 4 The same thing occurs when I try to install any of the wireless applications. All other software installs fine; the error only appears when trying to install the firmware. I tried to go to the link (http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2) and download the package, but found no makefile in the package. Please help me.

    Read the article

  • My feenix nascita mouse isn't moving my cursor. Ubuntu version appears to not matter. How do I debug this, and get the mouse working?

    - by NullVoxPopuli
    I had 13.04, upgraded to 13.10, and did a fresh install of 13.10, no luck. Any ideas? Just tail -f'd my syslog and plugged in the mouse. Here is what I got: Oct 26 16:15:50 Orithyia kernel: [83369.618365] usb 1-1.5.2: new full-speed USB device number 6 using ehci-pci Oct 26 16:15:50 Orithyia kernel: [83369.718913] usb 1-1.5.2: New USB device found, idVendor=04d9, idProduct=a081 Oct 26 16:15:50 Orithyia kernel: [83369.718919] usb 1-1.5.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 Oct 26 16:15:50 Orithyia kernel: [83369.718921] usb 1-1.5.2: Product: USB Gaming Mouse Oct 26 16:15:50 Orithyia kernel: [83369.718924] usb 1-1.5.2: Manufacturer: Holtek Oct 26 16:15:50 Orithyia kernel: [83369.722486] input: Holtek USB Gaming Mouse as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.5/1-1.5.2/1-1.5.2:1.0/input/input13 Oct 26 16:15:50 Orithyia kernel: [83369.722786] hid-generic 0003:04D9:A081.0004: input,hidraw3: USB HID v1.10 Keyboard [Holtek USB Gaming Mouse] on usb-0000:00:1a.0-1.5.2/input0 Oct 26 16:15:50 Orithyia kernel: [83369.729362] hid-generic 0003:04D9:A081.0005: usage index exceeded Oct 26 16:15:50 Orithyia kernel: [83369.729366] hid-generic 0003:04D9:A081.0005: item 0 2 2 2 parsing failed Oct 26 16:15:50 Orithyia kernel: [83369.729379] hid-generic: probe of 0003:04D9:A081.0005 failed with error -22 Oct 26 16:15:50 Orithyia kernel: [83369.731759] hid-generic 0003:04D9:A081.0006: hiddev0,hidraw4: USB HID v1.10 Device [Holtek USB Gaming Mouse] on usb-0000:00:1a.0-1.5.2/input2 Oct 26 16:15:50 Orithyia mtp-probe: checking bus 1, device 6: "/sys/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.5/1-1.5.2" Oct 26 16:15:50 Orithyia mtp-probe: bus: 1, device: 6 was not an MTP device So I can see it was detected as a mouse, but the cursor doesn't move when I move it. Not sure where to debug from here.

    Read the article

  • decouple software components via name convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multi-tier architecture I have the base class DAL.Device (my entity) and the interfaces BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), and BL.IDriverFactory (handles the driver creation requests). Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From the customer's point of view that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparison (extracting the xx from xxDriverCreator and xxDevice), as sketched below. Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
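
    A minimal sketch of the convention-based lookup described above: for a device of type XxxDevice, find a type named XxxDriverCreator among the driver assemblies loaded via reflection. The interfaces mirror the ones named in the question; the factory body itself is an illustrative assumption, not existing code.

      using System;
      using System.Linq;
      using System.Reflection;

      public class Device { }                       // DAL.Device
      public class RestDevice : Device { }          // DAL.RestDevice

      public interface IDriver { }                  // BL.IDriver
      public interface IDriverCreator               // BL.IDriverCreator
      {
          IDriver Create(Device device);
      }

      public static class ConventionDriverFactory
      {
          // Resolve "RestDevice" -> "RestDriverCreator" by name, so the factory never
          // references a concrete driver assembly at compile time.
          public static IDriverCreator ResolveCreator(Device device, Assembly[] driverAssemblies)
          {
              var prefix = device.GetType().Name.Replace("Device", string.Empty);
              var creatorName = prefix + "DriverCreator";

              var creatorType = driverAssemblies
                  .SelectMany(a => a.GetTypes())
                  .FirstOrDefault(t => t.Name == creatorName
                                       && typeof(IDriverCreator).IsAssignableFrom(t)
                                       && !t.IsAbstract);

              if (creatorType == null)
                  throw new InvalidOperationException(
                      "No driver creator found for " + device.GetType().Name);

              return (IDriverCreator)Activator.CreateInstance(creatorType);
          }
      }

    The same lookup could key off a custom attribute instead of the type name; either way, the revalidation burden moves from "recompile and re-reference the factory" to "drop a new driver assembly into the folder that gets scanned".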

    Read the article

  • What is meant by "no password set" for the root account (and others)?

    - by MMA
    Several years back, we were more accustomed to changing to the root account using the su command. First, we switched to the root account, and then executed those root commands. Now we are more accustomed to using the sudo command. But we know that the root account is there. We can readily find the home directory of user root. $ ls -ld /root/ drwx------ 18 root root 4096 Oct 22 17:21 /root/ Now my point is, it is stated that "the root password in Ubuntu is left unset". Please see the answers to this question. Most of the answers have something to this effect in the first paragraph. One or two answers further state that "the account is left disabled". Now my (primary) questions are: What is meant by an unset password? Is it blank? Is it null? Or something else more cryptic? How does the account become enabled once I set a password for it (sudo passwd root)? In order to get a better understanding, I checked the /etc/shadow file. Since I have already set a password for the root account, I can no longer see what was there before (the encrypted password). So, I created another account and left it disabled. The corresponding entry in the /etc/shadow file is testpassword:!:16020:0:99999:7::: Now perhaps my above queries need to be changed to: what does an ! in the password field mean? Other encrypted passwords are those very long cryptic strings. How come this encrypted form is only one character long? And does an account become disabled if I put an ! in the (encrypted) password field?

    Read the article

  • "bug" in C++11 text by Stroustrup?

    - by Astara
    I found an apparent contradiction in the C++ text having to do with the result of the c_str() function operating on std::strings (in my copy, the definition and contradiction are on p1040). First it defines the c_str() function as something that produces a 'C-style' (zero-terminated) string, but later it talks about how the string returned by c_str() can have 'C'-style end-of-string terminators (i.e. NULs) embedded in it, even though a C string is defined by being NUL-terminated. Um... does anyone else feel that this is a 'stretching' of the C-string concept beyond its definition? I.e. I think what it means is that if you were to look at the length() function as applied to the string, it will show a different end of string than using the C definition of a z-string -- one that can contain any character except NUL, and is terminated by NUL. I likely don't have to worry about it in any of my programs, but it seems like a subtle distinction that makes a C++ c_str() result not really a 'C' string. Am I misunderstanding this issue? Thanks!

    Read the article

  • Creating a Predicate Builder extension method

    - by Rippo
    I have a Kendo UI Grid on which I currently allow filtering on multiple columns. I am wondering if there is an alternative approach that removes the outer switch statement. Basically I want to be able to create an extension method so I can filter on an IQueryable<T>, and I want to drop the outer case statement so I don't have to switch on column names. private static IQueryable<Contact> FilterContactList(FilterDescriptor filter, IQueryable<Contact> contactList) { switch (filter.Member) { case "Name": switch (filter.Operator) { case FilterOperator.StartsWith: contactList = contactList.Where(w => w.Firstname.StartsWith(filter.Value.ToString()) || w.Lastname.StartsWith(filter.Value.ToString()) || (w.Firstname + " " + w.Lastname).StartsWith(filter.Value.ToString())); break; case FilterOperator.Contains: contactList = contactList.Where(w => w.Firstname.Contains(filter.Value.ToString()) || w.Lastname.Contains(filter.Value.ToString()) || (w.Firstname + " " + w.Lastname).Contains( filter.Value.ToString())); break; case FilterOperator.IsEqualTo: contactList = contactList.Where(w => w.Firstname == filter.Value.ToString() || w.Lastname == filter.Value.ToString() || (w.Firstname + " " + w.Lastname) == filter.Value.ToString()); break; } break; case "Company": switch (filter.Operator) { case FilterOperator.StartsWith: contactList = contactList.Where(w => w.Company.StartsWith(filter.Value.ToString())); break; case FilterOperator.Contains: contactList = contactList.Where(w => w.Company.Contains(filter.Value.ToString())); break; case FilterOperator.IsEqualTo: contactList = contactList.Where(w => w.Company == filter.Value.ToString()); break; } break; } return contactList; } Some additional information: I am using NHibernate Linq. Another problem is that the "Name" column on my grid is actually "Firstname" + " " + "LastName" on my contact entity. We can also assume that all filterable columns will be strings.
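
    One possible shape for such an extension method is sketched below: it builds the Firstname.StartsWith(...) || Lastname.StartsWith(...) expression at runtime from a list of string selectors, so the outer switch on column names disappears. It assumes Kendo's FilterOperator enum and standard LINQ expression trees; whether NHibernate.Linq translates the resulting predicate for every operator would still need to be verified.

      using System;
      using System.Linq;
      using System.Linq.Expressions;
      using System.Reflection;
      // FilterOperator is the Kendo UI MVC enum already used in the question.

      public static class QueryableFilterExtensions
      {
          // Applies "any of these string members matches the filter value" to the query.
          public static IQueryable<T> WhereAny<T>(
              this IQueryable<T> source,
              FilterOperator op,
              string value,
              params Expression<Func<T, string>>[] selectors)
          {
              MethodInfo method;
              switch (op)
              {
                  case FilterOperator.StartsWith:
                      method = typeof(string).GetMethod("StartsWith", new[] { typeof(string) }); break;
                  case FilterOperator.Contains:
                      method = typeof(string).GetMethod("Contains", new[] { typeof(string) }); break;
                  case FilterOperator.IsEqualTo:
                      method = typeof(string).GetMethod("Equals", new[] { typeof(string) }); break;
                  default:
                      throw new NotSupportedException("Operator not handled: " + op);
              }

              var parameter = Expression.Parameter(typeof(T), "x");
              var constant = Expression.Constant(value);
              Expression body = null;

              foreach (var selector in selectors)
              {
                  // Rebind each selector to the shared parameter, then call e.g. x.Firstname.StartsWith(value).
                  var member = new ParameterReplacer(selector.Parameters[0], parameter).Visit(selector.Body);
                  var call = Expression.Call(member, method, constant);
                  body = body == null ? (Expression)call : Expression.OrElse(body, call);
              }

              if (body == null) return source;
              return source.Where(Expression.Lambda<Func<T, bool>>(body, parameter));
          }

          // Rewrites references to a selector's original lambda parameter so all selectors share one parameter.
          private class ParameterReplacer : ExpressionVisitor
          {
              private readonly ParameterExpression _from, _to;
              public ParameterReplacer(ParameterExpression from, ParameterExpression to) { _from = from; _to = to; }
              protected override Expression VisitParameter(ParameterExpression node)
              {
                  return node == _from ? _to : node;
              }
          }
      }

    With that in place, the two column cases collapse to calls like contactList.WhereAny(filter.Operator, filter.Value.ToString(), c => c.Firstname, c => c.Lastname, c => c.Firstname + " " + c.Lastname) and contactList.WhereAny(filter.Operator, filter.Value.ToString(), c => c.Company). The composite "Firstname + Lastname" selector works because the replacer visits the whole selector body, not just simple member accesses.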

    Read the article

  • How can I convince my boss that ANSI C is inadequate for our new project?

    - by justifiably cowardly
    A few months ago, we started developing an app to control an in-house developed test equipment and record a set of measurements. It should have a simple UI, and would likely require threads due to the continuous recording that must take place. This application will be used for a few years, and shall be maintained by a number of computer science students during this period. Our boss graduated some 30 years ago (not to be taken as an offense; I have more than half that time on my back too) and has mandated that we develop this application in ANSI C. The rationale is that he is the only one that will be around the entire time, and therefore he must be able to understand what we are doing. He also ruled that we should use no abstract data types; he even gave us a list with the name of the global variables (sigh) he wants us to use. I actually tried that approach for a while, but it was really slowing me down to make sure that all pointer operations were safe and all strings had the correct size. Additionally, the number of lines of code that actually related to the problem in hand was a only small fraction of our code base. After a few days, I scrapped the entire thing and started anew using C#. Our boss has already seen the program running and he likes the way it works, but he doesn't know that it's written in another language. Next week the two of us will meet to go over the source code, so that he "will know how to maintain it". I am sort of scared, and I would like to hear from you guys what arguments I could use to support my decision. Cowardly yours,

    Read the article

  • determine an application's process name on linux (ubuntu)

    - by Jacob
    This is the situation: working on (the next version of) a Unity quicklist editor, I would like to add a reliable way of "restarting" launcher icons. To do so, I need to remove the icon (editing gsettings) and replace it in the same position. So far no problem. However, if the application in question is running, the user will possibly lose data, as the application will quit when its icon is removed from the launcher. What I need is a reliable way to find an application's process name, to let the editor check the list of running processes and, if the application is running, warn the user that the icon cannot be restarted. What I did so far is make the editor look into the .desktop file to read the command, also read the command stripped of its directory section, and furthermore look into possible remote scripts the desktop file command might refer to, looking for strings starting with "./" (a sketch of this heuristic appears below). Although the method seems to work well with all applications I tested it on, I have the feeling there must be an easier way to cover the problem in an "all in one" way... Is there? Also, suggestions to catch more exceptional situations are welcome!
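
    For concreteness, here is a minimal sketch of that heuristic in C# (the editor itself may well be written in another language, and the .desktop path is an illustrative assumption): read the Exec= line, take the basename of its first token, and compare it against the running process names.

      using System;
      using System.Diagnostics;
      using System.IO;
      using System.Linq;

      class LauncherCheck
      {
          // Guess the process name from a .desktop file's Exec= line:
          // first token of the command, with its directory part stripped.
          // This is a heuristic; Exec lines can also carry env prefixes or wrappers.
          static string GuessProcessName(string desktopFilePath)
          {
              var execLine = File.ReadLines(desktopFilePath)
                  .FirstOrDefault(l => l.StartsWith("Exec="));
              if (execLine == null) return null;

              var command = execLine.Substring("Exec=".Length).Trim();
              var firstToken = command.Split(' ').First();   // drop arguments like %U
              return Path.GetFileName(firstToken);           // drop the directory section
          }

          static void Main()
          {
              var name = GuessProcessName("/usr/share/applications/gedit.desktop");
              // Process names reported by the OS are usually the executable's base name.
              bool running = Process.GetProcesses()
                  .Any(p => string.Equals(p.ProcessName, name, StringComparison.OrdinalIgnoreCase));
              Console.WriteLine(running
                  ? name + " is running - warn the user before touching its icon."
                  : name + " is not running - safe to restart the icon.");
          }
      }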

    Read the article

  • Accessing an Oracle Database via JDBC from Nashorn (Part 2)

    - by Homma
    This series shows how to access an Oracle database over JDBC from JavaScript running on Nashorn; this entry covers getting the Oracle JDBC driver onto the classpath and calling into it. The original (Japanese) post is at https://blogs.oracle.com/nashorn_ja/entry/nashorn_jdbc_2. Putting the JDBC driver on the classpath: the Oracle JDBC driver is installed as ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar, so that JAR needs to be on the classpath when the script runs. The following script obtains a connection from an OracleDataSource and prints the driver version: var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource"); var ods = new OracleDataSource(); ods.setURL("jdbc:oracle:thin:test/test@dbsrv:1521:orcl"); var conn = ods.getConnection(); var meta = conn.getMetaData(); print("JDBC driver version is " + meta.getDriverVersion()); Run it with the driver JAR passed to jjs via the -cp option: $ jjs -cp ojdbc6.jar version.js JDBC driver version is 11.2.0.3.0 The driver version can also be checked directly, because the JAR declares a Main-Class: $ java -jar ojdbc6.jar Oracle 11.2.0.3.0 JDBC 4.0 compiled with JDK6 on Thu_Jul_11_15:43:23_PDT_2013 #Default Connection Properties Resource #Fri May 30 10:37:32 JST 2014 When a JAR declares a Main-Class in its manifest, java -jar runs that class's main(String[]) method. To see which class that is, extract the JAR with the jar command and look at META-INF/MANIFEST.MF: $ jar xf ojdbc6.jar $ grep Main-Class META-INF/MANIFEST.MF Main-Class: oracle.jdbc.OracleDriver The Main-Class is oracle.jdbc.OracleDriver, and its main method can be called from Nashorn as well: $ jjs -cp ojdbc6.jar jjs> var OracleDriver = Java.type("oracle.jdbc.OracleDriver"); jjs> var StringArray = Java.type("java.lang.String[]"); jjs> OracleDriver.main(new StringArray(0)) Oracle 11.2.0.3.0 JDBC 4.0 compiled with JDK6 on Thu_Jul_11_15:43:23_PDT_2013 #Default Connection Properties Resource #Fri May 30 10:55:22 JST 2014 null jjs> Any Java class on the classpath can be obtained as a JavaClass object with Java.type() and used from the script in this way. This entry showed how to reach an Oracle database over JDBC from Nashorn and how to find and invoke a JAR's Main-Class; the next entry will issue SQL against the Oracle database.

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into the  problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sound like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. Http Authentication is based on a challenge response mechanism typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in  Fiddler or some other HTTP client Proxy that captures requests you’ll see that each request re-authenticates: Here are two requests fired back to back: and you can see the 401 challenge, the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present. 
WebRequest.PreAuthenticate
And this is precisely what the WebRequest.PreAuthenticate property does: it’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends them on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence: where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – basically it saves one authentication round trip for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way, but one downside is that there’s no way to log out the client. Since the client always sends the credentials once authenticated, only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (i.e. re-challenging with a forced 401 request).
Forcing Basic Authentication Credentials on the first Request
On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask, but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly, but there it is. Now the following works only with Basic Authentication because it’s pretty straightforward to create the Basic Authorization ‘token’ in code, since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally insecure and should only be used over HTTPS/SSL connections (I’m not in this example so I can capture the Fiddler trace and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth).
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
One easy way is to implement a partial class that allows you to add headers with something like this: public partial class TaxService { protected NameValueCollection Headers = new NameValueCollection(); public void AddHttpHeader(string key, string value) { this.Headers.Add(key,value); } public void ClearHttpHeaders() { this.Headers.Clear(); } protected override WebRequest GetWebRequest(Uri uri) { HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri); request.Headers.Add(this.Headers); return request; } } where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen.
Not a Common Problem, but when it happens…
This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}
© Rick Strahl, West Wind Technologies, 2005-2010

    Read the article

  • How to configure LocalSessionFactoryBean to release connections after transaction end?

    - by peter
    I am testing an application (Spring 2.5, Hibernate 3.5.0 Beta, Atomikos 3.6.2, and Postgreql 8.4.2) with the configuration for the DAO listed below. The problem that I see is that the pool of 10 connections with the dataSource gets exhausted after the 10's transaction. I know 'hibernate.connection.release_mode' has no effect unless the session is obtained with openSession rather then using a contextual session. I am wandering if anyone has found a way to configure the LocalSessionFactoryBean to release connections after any transaction. Thank you Peter <bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean" init-method="init" destroy-method="close"> <property name="uniqueResourceName"><value>XADBMS</value></property> <property name="xaDataSourceClassName"> <value>org.postgresql.xa.PGXADataSource</value> </property> <property name="xaProperties"> <props> <prop key="databaseName">${jdbc.name}</prop> <prop key="serverName">${jdbc.server}</prop> <prop key="portNumber">${jdbc.port}</prop> <prop key="user">${jdbc.username}</prop> <prop key="password">${jdbc.password}</prop> </props> </property> <property name="poolSize"><value>10</value></property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="mappingResources"> <list> <value>Abc.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">on</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.connection.isolation">3</prop> <prop key="hibernate.current_session_context_class">jta</prop> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.release_mode">auto</prop> <prop key="hibernate.transaction.auto_close_session">true</prop> </props> </property> </bean> <!-- Transaction definition here --> <bean id="userTransactionService" class="com.atomikos.icatch.config.UserTransactionServiceImp" init-method="init" destroy-method="shutdownForce"> <constructor-arg> <props> <prop key="com.atomikos.icatch.service"> com.atomikos.icatch.standalone.UserTransactionServiceFactory </prop> </props> </constructor-arg> </bean> <!-- Construct Atomikos UserTransactionManager, needed to configure Spring --> <bean id="AtomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close" depends-on="userTransactionService"> <property name="forceShutdown" value="false" /> </bean> <!-- Also use Atomikos UserTransactionImp, needed to configure Spring --> <bean id="AtomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp" depends-on="userTransactionService"> <property name="transactionTimeout" value="300" /> </bean> <!-- Configure the Spring framework to use JTA transactions from Atomikos --> <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" depends-on="userTransactionService"> <property name="transactionManager" ref="AtomikosTransactionManager" /> <property name="userTransaction" ref="AtomikosUserTransaction" /> </bean> <!-- the transactional advice (what 'happens'; see the <aop:advisor/> bean below) --> <tx:advice id="txAdvice" transaction-manager="txManager"> 
<tx:attributes> <!-- all methods starting with 'get' are read-only --> <tx:method name="get*" read-only="true" propagation="REQUIRED"/> <!-- other methods use the default transaction settings (see below) --> <tx:method name="*" propagation="REQUIRED"/> </tx:attributes> </tx:advice> <aop:config> <aop:advisor pointcut="execution(* *.*.AbcDao.*(..))" advice-ref="txAdvice"/> </aop:config> <!-- DAO objects --> <bean id="abcDao" class="test.dao.impl.HibernateAbcDao" scope="singleton"> <property name="sessionFactory" ref="sessionFactory"/> </bean>

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans.   If you are considering using them now for an important trip - reconsider. Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)!  At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia.  I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours.  This is after 5 hours total at the airport.  If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding. Details below (times are approximate): First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok enough mushy - here are the dirty details. 2:30 AM - Early Check-in Attempt - we attempted to check-in for our flight online. Some sort of technology error on website, instructed to checkin at desk. 4:30 AM - Arrive at airport. Try to check-in at kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia provided itinerary does not match the Expedia provided tickets. We are informed that when that happens American, JetBlue, and others that use the same software cannot check you in for the flight because. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system. 4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining. 
Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary get's connected, explains the situation, and then Mary's connection gets terminated. 5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales, a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon, could they please help us. "Yes, I will help you". Again I give the phone to Mary who provides them with a call back number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated. 5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones. 5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for now having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone iwth Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us. 6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check-in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - it no longer matters why (not to mention the fact that we already knew why, and have known why since 4:30 AM), and have known the solution since 4:30 AM. Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor but that will take 1.5 hours. 
6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, i asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed out flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which how is about 1 hour and 10 minutes away). 6:30 AM - I start tweeting my frustration with iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. Its ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right. 7:45 AM - JetBlue mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why I'm not exactly sure). I say ok. 8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back. 9:35 AM - I put down the iPhone Twitter app and picks up the laptop. You think I made some noise with my iPhone? Heh 11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details".  If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response.  The other 4?  Ads.  Um - #MultiFAIL? To Expedia:  You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number.  You also know how upset you have made both me and my new bride by treating us with such a ... non caring, scripted, uncooperative, argumentative, and possibly even deceitful manner.  In the wise words of the great Kenan Thompson of SNL: "FIX IT!".  And no, I'm NOT going away until you make this right. Period. 11:45 AM - Expedia corporate office called.  
The woman I spoke to was very nice and apologetic.  She listened to me tell the story again, she says she understands the problem and she is going to work to resolve it.  I don't have any details on what exactly that resolution might me, she said she will call me back in 20 minutes.  She found out about the problem via Twitter.  Thank you Twitter, and all of you who helped.  Hopefully social media will win my wife and I our honeymoon, and hopefully Expedia will encourage their customer service teams treat their customers properly. 12:22 PM - Spoke to Fran again from Expedia corporate office.  She has a flight for us tonight.  She is booking it now.  We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late.  She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night.  Thank you everyone for your help.  I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed.  For now, I'm going to take a breather and go kiss my wonderful wife! 1:50 PM - Have not yet received the promised phone call.  We did receive an email with a new itinerary for a flight but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together.  With the original booking I carefully selected our seats for every segment of our trip.  I decided to call into the phone number that Fran from the Expedia corporate office gave me.  Its automated voice system identified itself as "Tier 3 Support".  I am currently still on hold with them, I have not gotten through to a human yet. 1:55 PM - Fran from Expedia called me back.  She confirmed us as booked.  She called the airlines to confirm.  Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection.  It is possible that i won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back).  In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom.  Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade).  Since they didn't offer - I asked, and was rejected.  I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution.  If this works out favorably for us, great.  If not - I'm not done making noise, Expedia.  You owe us, and I expect you to make it right.  You haven't quite done that yet. Thanks - Thank you to Twitter.  Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help.  Thanks especially to my PowerShell and Sharepoint friends, my local friends, and those connectors who encouraged me and spread my story. 5:15 PM - Love Wins - After all this, Lauren and I are exhausted.  We both took a short nap, and when we woke up we talked about the last 24 hours.  It was a big, amazing, story-filled 24 hours.  I said that Expedia won, but Lauren said no.  She pointed out how lucky we are.  We are in love and married.  
We have wonderful family and friends.  We are both hard-working successful people who love what they do.  We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives...  That's a lot of good.  Expedia didn't win.  This was (is) a big loss for Expedia.  It is a public blemish for all to see.  But Lauren and I did win, big time.  Expedia may not have made things right - but things are right for us.  Post in progress... I will relay any further comments (or lack of) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates.  I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • Getting started with Exchange Web Services 2010

    - by Adam Tuttle
    I've been tasked with writing a SOAP web-service in .Net to be middleware between EWS2010 and an application server that previously used WebDAV to connect to Exchange. (As I understand it, WebDAV is going away with EWS2010, so the application server will no longer be able to connect as it previously did, and it is exponentially harder to connect to EWS without WebDAV. The theory is that doing it in .Net should be easier than anything else... Right?!) My end goal is to be able to get and create/update email, calendar items, contacts, and to-do list items for a specified Exchange account. (Deleting is not currently necessary, but I may build it in for future consideration, if it's easy enough). I was originally given some sample code, which did in fact work, but I quickly realized that it was outdated. The types and classes used appear nowhere in the current documentation. For example, the method used to create a connection to the Exchange server was: ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); For what it's worth, this was using an assembly that came with the sample code: Microsoft.Exchange.WebServices.dll ("MEWS"). Before I realized that this wasn't the current standard way to accomplish the connection, and it worked, I tried to build on it and add a method to create calendar items, which I copied from here: static void CreateAppointment(ExchangeServiceBinding esb) { // Create the appointment. CalendarItemType appointment = new CalendarItemType(); ... } Right away, I'm confronted with the difference between ExchangeService and ExchangeServiceBinding ("ESB"); so I started Googling to try and figure out how to get an ESB definition so that the CreateAppointment method will compile. I found this blog post that explains how to generate a proxy class from a WSDL, which I did. Unfortunately, this caused some conflicts where types that were defined in the original Assembly, Microsoft.Exchange.WebServices.dll (that came with the sample code) overlapped with Types in my new EWS.dll assembly (which I compiled from the code generated from the services.wsdl provided by the Exchange server). I excluded the MEWS assembly, which only made things worse. I went from a handful of errors and warnings to 25 errors and 2,510 warnings. All kinds of types and methods were not found. Something is clearly wrong, here. So I went back on the hunt. I found instructions on adding service references and web references (i.e. the extra steps it takes in VS2008), and I think I'm back on the right track. I removed (actually, for now, just excluded) all previous assemblies I had been trying; and I added a service reference for https://my.exchange-server.com/ews/services.wsdl Now I'm down to just 1 error and 1 warning. Warning: The element 'transport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty. This is in reference to a change that was made to web.config when I added the service reference; and I just found a fix for that here on SO. I've commented that section out as indicated, and it did make the warning go away, so woot for that. The error hasn't been so easy to get around, though: Error: The type or namespace name 'ExchangeService' could not be found (are you missing a using directive or an assembly reference?) 
This is in reference to the function I was using to create the EWS connection, called by each of the web methods: private ExchangeService getService(String AutoDiscoverEmailAddress, String AuthEmailAddress, String AuthEmailPassword) { ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); return svc; } This function worked perfectly with the MEWS assembly from the sample code, but the ExchangeService type is no longer available. (Nor is ExchangeServiceBinding, that was the first thing I checked.) At this point, since I'm not following any directions from the documentation (I couldn't find anywhere in the documentation that said to add a service reference to your Exchange server's services.wsdl -- but that does seem to be the best/farthest I've gotten so far), I feel like I'm flying blind. I know I need to figure out whatever it is that should replace ExchangeService / ExchangeServiceBinding, implement that, and then work through whatever errors crop up as a result of that switch... But I have no idea how to do that, or where to look for how to do it. Googling "ExchangeService" and "ExchangeServiceBinding" only seem to lead back to outdated blog posts and MSDN, neither of which has proven terribly helpful thus far. Help me, Obi-Wan, you're my only hope!
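
    A note on the two types involved: ExchangeService is the entry point of the EWS Managed API (the Microsoft.Exchange.WebServices.dll assembly that came with the sample code), while ExchangeServiceBinding comes from proxy classes auto-generated from services.wsdl; referencing both in one project is what typically produces the kind of type conflicts described above. Below is a minimal, illustrative sketch of the Managed API route only; it assumes the Microsoft.Exchange.WebServices.dll assembly is referenced, and the e-mail address and password are placeholders, not values from the original question.

    using System;
    using Microsoft.Exchange.WebServices.Data;

    // Connect with the Managed API's ExchangeService (credentials and autodiscover address are placeholders)
    private static ExchangeService ConnectToExchange(string autodiscoverEmail, string user, string password)
    {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010);
        service.Credentials = new WebCredentials(user, password);
        service.AutodiscoverUrl(autodiscoverEmail);
        return service;
    }

    // Create a simple calendar item; Appointment is the Managed API counterpart of the proxy-style CalendarItemType
    private static void CreateTestAppointment(ExchangeService service)
    {
        Appointment appointment = new Appointment(service);
        appointment.Subject = "Test appointment";
        appointment.Start = DateTime.Now.AddDays(1);
        appointment.End = appointment.Start.AddHours(1);
        appointment.Save(SendInvitationsMode.SendToNone);
    }

    If a sketch along these lines compiles, the service reference generated from services.wsdl (and its conflicting types) can usually be removed entirely, since the Managed API talks to the same endpoint on its own.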

    Read the article

  • REST API Help in Rails

    - by dannymcc
    Hi Everyone, I am trying to get some information posted using our accountancy package (FreeAgentCentral) using their API via a GEM. http://github.com/aaronrussell/freeagent_api/ I have the following code to get it working (supposedly): Kase Controller def create @kase = Kase.new(params[:kase]) @company = Company.find(params[:kase][:company_id]) @kase = @company.kases.create!(params[:kase]) respond_to do |format| if @kase.save UserMailer.deliver_makeakase("[email protected]", "Highrise", @kase) @kase.create_freeagent_project(current_user) #flash[:notice] = 'Case was successfully created.' flash[:notice] = fading_flash_message("Case was successfully created & sent to Highrise.", 5) format.html { redirect_to(@kase) } format.xml { render :xml => @kase, :status => :created, :location => @kase } else format.html { render :action => "new" } format.xml { render :xml => @kase.errors, :status => :unprocessable_entity } end end end To save you looking through, the important part is: @kase.create_freeagent_project(current_user) Kase Model # FreeAgent API Project Create # Required attribues # :contact_id # :name # :payment_term_in_days # :billing_basis # must be 1, 7, 7.5, or 8 # :budget_units # must be Hours, Days, or Monetary # :status # must be Active or Completed def create_freeagent_project(current_user) p = Freeagent::Project.create( :contact_id => 0, :name => "#{jobno} - #{highrisesubject}", :payment_terms_in_days => 5, :billing_basis => 1, :budget_units => 'Hours', :status => 'Active' ) user = Freeagent::User.find_by_email(current_user.email) Freeagent::Timeslip.create( :project_id => p.id, :user_id => user.id, :hours => 1, :new_task => 'Setup', :dated_on => Time.now ) end lib/freeagent_api.rb require 'rubygems' gem 'activeresource', '< 3.0.0.beta1' require 'active_resource' module Freeagent class << self def authenticate(options) Base.authenticate(options) end end class Error < StandardError; end class Base < ActiveResource::Base def self.authenticate(options) self.site = "https://#{options[:domain]}" self.user = options[:username] self.password = options[:password] end end # Company class Company def self.invoice_timeline InvoiceTimeline.find :all, :from => '/company/invoice_timeline.xml' end def self.tax_timeline TaxTimeline.find :all, :from => '/company/tax_timeline.xml' end end class InvoiceTimeline < Base self.prefix = '/company/' end class TaxTimeline < Base self.prefix = '/company/' end # Contacts class Contact < Base end # Projects class Project < Base def invoices Invoice.find :all, :from => "/projects/#{id}/invoices.xml" end def timeslips Timeslip.find :all, :from => "/projects/#{id}/timeslips.xml" end end # Tasks - Complete class Task < Base self.prefix = '/projects/:project_id/' end # Invoices - Complete class Invoice < Base def mark_as_draft connection.put("/invoices/#{id}/mark_as_draft.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end def mark_as_sent connection.put("/invoices/#{id}/mark_as_sent.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end def mark_as_cancelled connection.put("/invoices/#{id}/mark_as_cancelled.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end end # Invoice items - Complete class InvoiceItem < Base self.prefix = '/invoices/:invoice_id/' end # Timeslips class Timeslip < Base def self.find(*arguments) scope = arguments.slice!(0) options = arguments.slice!(0) || {} if options[:params] && options[:params][:from] 
&& options[:params][:to] options[:params][:view] = options[:params][:from]+'_'+options[:params][:to] options[:params].delete(:from) options[:params].delete(:to) end case scope when :all then find_every(options) when :first then find_every(options).first when :last then find_every(options).last when :one then find_one(options) else find_single(scope, options) end end end # Users class User < Base self.prefix = '/company/' def self.find_by_email(email) users = User.find :all users.each do |u| u.email == email ? (return u) : next end raise Error, "No user matches that email!" end end end config/initializers/freeagent.rb Freeagent.authenticate({ :domain => 'XXXXX.freeagentcentral.com', :username => '[email protected]', :password => 'XXXXXX' }) The above render the following error when trying to create a new Case and send the details to FreeAgent: ActiveResource::ResourceNotFound in KasesController#create Failed with 404 Not Found and ActiveResource::ResourceNotFound (Failed with 404 Not Found): app/models/kase.rb:56:in `create_freeagent_project' app/controllers/kases_controller.rb:96:in `create' app/controllers/kases_controller.rb:93:in `create' Rendered rescues/_trace (176.5ms) Rendered rescues/_request_and_response (1.1ms) Rendering rescues/layout (internal_server_error) If anyone can shed any light on this problem it would be greatly appreciated! Thanks, Danny

    Read the article

  • Looking into Enum Support in Entity Framework 5.0 Code First

    - by nikolaosk
    In this post I will show you, with a hands-on demo, the enum support that is available in Visual Studio 2012, .Net Framework 4.5 and Entity Framework 5.0. You can have a look at this post to learn about the support for multiple diagrams per model that exists in Entity Framework 5.0. We will demonstrate this with a step-by-step example. I will use Visual Studio 2012 Ultimate. You can also use Visual Studio 2012 Express Edition. Before I move on to the actual demo I must say that in EF 5.0 an enumeration can have the following underlying types: Byte, Int16, Int32, Int64 and SByte. Obviously I cannot go into much detail on what EF is and what it does, so I will give again a short introduction. The .Net framework provides support for Object Relational Mapping through EF. So EF is an ORM tool and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we have many things provided for us out of the box: the automatic generation of SQL code, the mapping of relational data to strongly typed objects, and all the changes made to the objects in memory are persisted in a transactional way back to the data store. You can find in this post an example of how to use the Entity Framework to retrieve data from an SQL Server database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example of how to use the "Model First" approach when working with ASP.Net and the Entity Framework. This model was first introduced in EF version 4.0: we could start with a blank model and then create a database from that model. When we made changes to the model, we could recreate the database from the new model. You can search in my blog, because I have posted many posts regarding ASP.Net and EF. I assume you have a working knowledge of C# and know a few things about EF. The Code First approach is more code-centric than the other two. Basically we write POCO classes and then we persist them to a database using something called DbContext. Code First relies on DbContext. We create two or three classes (e.g. Person, Product) with properties, and then these classes interact with the DbContext class. We can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach, and by using this approach we can write unit tests much more easily. DbContext is a new context class and is a smaller, more lightweight wrapper around the main context class, which is ObjectContext (Schema First and Model First). Let's begin building our sample application. 1) Launch Visual Studio. Create an ASP.Net Empty Web Application. Choose an appropriate name for your application. 2) Add a web form, a default.aspx page, to the application. 3) Now we need to make sure the Entity Framework is included in our project. Go to Solution Explorer and right-click on the project name. Then select Manage NuGet Packages... In the Manage NuGet Packages dialog, select the Online tab and choose the EntityFramework package. Finally click Install. Have a look at the picture below. 4) Create a new folder. Name it CodeFirst. 5) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place it in the CodeFirst folder.
The code follows public class Footballer { public int FootballerID { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public double Weight { get; set; } public double Height { get; set; } public DateTime JoinedTheClub { get; set; } public int Age { get; set; } public List<Training> Trainings { get; set; } public FootballPositions Positions { get; set; } }    Now I am going to define my enum values in the same class file, Footballer.cs    public enum FootballPositions    {        Defender,        Midfielder,        Striker    } 6) Now we need to create the Training class. Add a new class to your application and place it in the CodeFirst folder.The code for the class follows.     public class Training     {         public int TrainingID { get; set; }         public int TrainingDuration { get; set; }         public string TrainingLocation { get; set; }     }   7) Then we need to create a context class that inherits from DbContext.Add a new class to the CodeFirst folder.Name it FootballerDBContext.Now that we have the entity classes created, we must let the model know.I will have to use the DbSet<T> property.The code for this class follows       public class FootballerDBContext:DbContext     {         public DbSet<Footballer> Footballers { get; set; }         public DbSet<Training> Trainings { get; set; }     } Do not forget to add  (using System.Data.Entity;) in the beginning of the class file 8) We must take care of the connection string. It is very easy to create one in the web.config.It does not matter that we do not have a database yet.When we run the DbContext and query against it,it will use a connection string in the web.config and will create the database based on the classes. In my case the connection string inside the web.config, looks like this      <connectionStrings>    <add name="CodeFirstDBContext"  connectionString="server=.\SqlExpress;integrated security=true;"  providerName="System.Data.SqlClient"/>                       </connectionStrings>   9) Now it is time to create Linq to Entities queries to retrieve data from the database . Add a new class to your application in the CodeFirst folder.Name the file DALfootballer.cs We will create a simple public method to retrieve the footballers. The code for the class follows public class DALfootballer     {         FootballerDBContext ctx = new FootballerDBContext();         public List<Footballer> GetFootballers()         {             var query = from player in ctx.Footballers where player.FirstName=="Jamie" select player;             return query.ToList();         }     }   10) Place a GridView control on the Default.aspx page and leave the default name.Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DatasourceID property of the GridView control to the ID of the ObjectDataSource control.(DataSourceID="ObjectDataSource1" ). Let's configure the ObjectDataSource control. Click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the Wizzard that pops up select the DALFootballer class and then in the next step choose the GetFootballers() method.Click Finish to complete the steps of the wizzard. Build your application.  11)  Let's create an Insert method in order to insert data into the tables. I will create an Insert() method and for simplicity reasons I will place it in the Default.aspx.cs file. 
private void Insert()        {            var footballers = new List<Footballer>            {                new Footballer {                                 FirstName = "Steven",LastName="Gerrard", Height=1.85, Weight=85,Age=32, JoinedTheClub=DateTime.Parse("12/12/1999"),Positions=FootballPositions.Midfielder,                Trainings = new List<Training>                             {                                     new Training {TrainingDuration = 3, TrainingLocation="MelWood"},                    new Training {TrainingDuration = 2, TrainingLocation="Anfield"},                    new Training {TrainingDuration = 2, TrainingLocation="MelWood"},                }                            },                            new Footballer {                                  FirstName = "Jamie",LastName="Garragher", Height=1.89, Weight=89,Age=34, JoinedTheClub=DateTime.Parse("12/02/2000"),Positions=FootballPositions.Defender,                Trainings = new List<Training>                                             {                                 new Training {TrainingDuration = 3, TrainingLocation="MelWood"},                new Training {TrainingDuration = 5, TrainingLocation="Anfield"},                new Training {TrainingDuration = 6, TrainingLocation="Anfield"},                }                           }                    };            footballers.ForEach(foot => ctx.Footballers.Add(foot));            ctx.SaveChanges();        }   12) In the Page_Load() event handling routine I called the Insert() method.        protected void Page_Load(object sender, EventArgs e)        {                   Insert();                }  13) Run your application and you will see that the following result,hopefully. You can see clearly that the data is returned along with the enum value.  14) You must have also a look at the database.Launch SSMS and see the database and its objects (data) created from EF Code First.Have a look at the picture below. Hopefully now you have seen the support that exists in EF 5.0 for enums.Hope it helps !!!
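
    Since the whole point of the demo is the enum support, a short query against the Positions property may also help. The following is a minimal sketch (not part of the original walkthrough) that builds on the Footballer, FootballPositions and FootballerDBContext classes defined above; it assumes the usual using System.Collections.Generic and using System.Linq directives are in place.

    private List<Footballer> GetDefenders()
    {
        using (var ctx = new FootballerDBContext())
        {
            // EF 5.0 translates the enum comparison into a comparison against the underlying int value in the generated SQL
            return (from player in ctx.Footballers
                    where player.Positions == FootballPositions.Defender
                    select player).ToList();
        }
    }

    Run against the data inserted above, this should return the Jamie Garragher row, since that Footballer was created with Positions = FootballPositions.Defender and the enum is persisted as an int column in the generated table.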

    Read the article

  • How can I transfer output that appears on the console and format it so that it appears on a web page

    - by lojayna
    package collabsoft.backlog_reports.c4; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.Statement; //import collabsoft.backlog_reports.c4.Report; public class Report { private Connection con; public Report(){ connectUsingJDBC(); } public static void main(String args[]){ Report dc = new Report(); dc.reviewMeeting(6, 8, 10); dc.createReport("dede",100); //dc.viewReport(100); // dc.custRent(3344,123,22,11-11-2009); } /** the following method is used to connect to the database **/ public void connectUsingJDBC() { // This is the name of the ODBC data source String dataSourceName = "Simple_DB"; try { // loading the driver in the memory Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // This is the connection URL String dbURL = "jdbc:odbc:" + dataSourceName; con = DriverManager.getConnection("jdbc:mysql://localhost:3306/Collabsoft","root",""); // This line is used to print the name of the driver and it would throw an exception if a problem occured System.out.println("User connected using driver: " + con.getMetaData().getDriverName()); //Addcustomer(con,1111,"aaa","aaa","aa","aam","111","2222","111"); //rentedMovies(con); //executePreparedStatement(con); //executeCallableStatement(con); //executeBatch(con); } catch (Exception e) { e.printStackTrace(); } } /** *this code is to link the SQL code with the java for the task *as an admin I should be able to create a report of a review meeting including notes, tasks and users *i will take the task id and user id and note id that will be needed to be added in the review *meeting report and i will display the information related to these ida **/ public void reviewMeeting(int taskID, int userID, int noteID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL report_review_meeting(?,?,?)}"); callableStatement.setInt(1,taskID); callableStatement.setInt(2,userID); callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } ////////////////////////////////// ///////////////////////////////// public void allproject(int projID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL all_project(?)}"); callableStatement.setInt(1,projID); //callableStatement.setInt(2,userID); //callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } /////////////////////////////// /** * here i take the event id and i take a string report and then * i relate the report with the event **/ public void createReport(String report,int E_ID )// law el proc bt return table { try{ Statement st = 
con.createStatement(); st.executeUpdate("UPDATE e_vent SET e_vent.report=report WHERE e_vent.E_ID= E_ID;"); /* CallableStatement callableStatement = con.prepareCall("{CALL Create_report(?,?)}"); callableStatement.setString(1,report); callableStatement.setInt(2,E_ID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); }*/ } catch(Exception e) { System.out.println("E"); System.out.println(e); } } /** *in the following method i view the report of the event having the ID eventID **/ public void viewReport(int eventID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL view_report(?)}"); callableStatement.setInt(1,eventID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } } // the result of these methods is being showed on the console , i am using WIcket and i want it 2 be showed on the web how is that done ?!

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third part of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would ‘mock’ the data layer of a legacy application when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project and, like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database: it lets you easily perform basic operations (e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meeting our testing needs. The testing scenario described here requires a running instance of an SQL Server database, and it also means that the Entity Framework model that sits on top of this database remains completely unaffected. First things first: for its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
With this prerequisite in place then, the test fixture's setup code could look something like this: [TestFixture, TestsOn(typeof(PersonRepository))] [Metadata("NDbUnit Quickstart URL",           "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")] [Description("Uses the NDbUnit library to provide test data to a local database.")] public class PersonRepositoryFixture {     #region Constants     private const string XmlSchema = @"..\..\TestData\School.xsd";     #endregion // Constants     #region Fields     private SchoolEntities _schoolContext;     private PersonRepository _personRepository;     private INDbUnitTest _database;     #endregion // Fields     #region Setup/TearDown     [FixtureSetUp]     public void FixtureSetUp()     {         var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;         _database = new SqlDbUnitTest(connectionString);         _database.ReadXmlSchema(XmlSchema);         var entityConnectionStringBuilder = new EntityConnectionStringBuilder         {             Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",             Provider = "System.Data.SqlClient",             ProviderConnectionString = connectionString         };         _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);         _personRepository = new PersonRepository(this._schoolContext);     }     [FixtureTearDown]     public void FixtureTearDown()     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         _schoolContext.Dispose();     }     ...  As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assemblies' App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INdUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this: private void InsertTestData(params string[] dataFileNames) {     _database.PerformDbOperation(DbOperationFlag.DeleteAll);     if (dataFileNames == null)     {         return;     }     try     {         foreach (string fileName in dataFileNames)         {             if (!File.Exists(fileName))             {                 throw new FileNotFoundException(Path.GetFullPath(fileName));             }             _database.ReadXml(fileName);             _database.PerformDbOperation(DbOperationFlag.InsertIdentity);         }     }     catch     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         throw;     } } This lets us easily insert test data from xml files, in any number and in a  controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. - Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source Projects. 
Last not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example: <?xml version="1.0" encoding="utf-8" ?> <School xmlns="http://tempuri.org/School.xsd">   <Person>     <PersonID>1</PersonID>     <LastName>Abercrombie</LastName>     <FirstName>Kim</FirstName>     <HireDate>1995-03-11T00:00:00</HireDate>   </Person>   <Person>     <PersonID>2</PersonID>     <LastName>Barzdukas</LastName>     <FirstName>Gytis</FirstName>     <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>   </Person>   <Person>     ... You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable. Executing some basic tests Here are some of the possible tests that can be written with the above preparations in place: private const string People = @"..\..\TestData\School.People.xml"; ... [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.List);     Assert.Count(34, names);     Assert.AreEqual("Abercrombie, Kim", names.First());     Assert.AreEqual("Zheng, Roger", names.Last()); } [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")] public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.Normal);     Assert.Count(34, names);     Assert.AreEqual("Alexandra Walker", names.First());     Assert.AreEqual("Yan Li", names.Last()); } [Test, TestsOn("PersonRepository.AddPerson")] public void AddPerson_CalledOnce_IncreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });     Assert.AreEqual(count + 1, _personRepository.Count); } [Test, TestsOn("PersonRepository.RemovePerson")] public void RemovePerson_CalledOnce_DecreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.RemovePerson(new Person { PersonID = 33 });     Assert.AreEqual(count - 1, _personRepository.Count); } Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases: Ok, and what if things are becoming somewhat more complex? Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimick a scenario which represents a more complex or exceptional case. 
The following test, for example, deals with the case that there is some sort of invalid input from the caller: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] [Row(null, typeof(ArgumentNullException))] [Row("", typeof(ArgumentException))] [Row("NotExistingCourse", typeof(ArgumentException))] public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType) {     var exception = Assert.Throws<RepositoryException>(() =>                                 _personRepository.GetCourseMembers(courseTitle));     Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException); } Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files namely, where you sometimes have to spend a lot of effort for carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents() {     InsertTestData(People, Course, Department, StudentGrade);     List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");     Assert.Count(4, persons);     Assert.ForAll(         persons,         @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),         "Person has none of the expected IDs."); } If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database. Conclusion The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically,  it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how much tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD-style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment... The sample solution A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30day-trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects App.config files accordingly.

    Read the article

  • Hi, I have a C hashing routine which is behaving strangely?

    - by aks
    Hi, In this hashing routine: 1.) I am able to add strings. 2.) I am able to view my added strings. 3.) When i try to add a duplicate string, it throws me an error saying already present. 4.) But, when i try to delete the same string which is already present in hash table, then the lookup_routine calls hash function to get an index. At this time, it throws a different hash index to the one it was already added. Hence, my delete routine is failing? I am able to understand the reason why for same string, hash fucntion calculates a different index each time (whereas the same logic works in view hash table routine)? Can someone help me? This is the Output, i am getting: $ ./a Press 1 to add an element to the hashtable Press 2 to delete an element from the hashtable Press 3 to search the hashtable Press 4 to view the hashtable Press 5 to exit Please enter your choice: 1 Please enter the string :gaura enters in add_string DEBUG purpose in hash function: str passed = gaura Hashval returned in hash func= 1 hashval = 1 enters in lookup_string str in lookup_string = gaura DEBUG purpose in hash function: str passed = gaura Hashval returned in hash func= 1 hashval = 1 DEBUG ERROR :element not found in lookup string DEBUG Purpose NULL Inserting... DEBUG1 : enters here hashval = 1 String added successfully Press 1 to add an element to the hashtable Press 2 to delete an element from the hashtable Press 3 to search the hashtable Press 4 to view the hashtable Press 5 to exit Please enter your choice: 1 Please enter the string :ayu enters in add_string DEBUG purpose in hash function: str passed = ayu Hashval returned in hash func= 1 hashval = 1 enters in lookup_string str in lookup_string = ayu DEBUG purpose in hash function: str passed = ayu Hashval returned in hash func= 1 hashval = 1 returns NULL in lookup_string DEBUG Purpose NULL Inserting... DEBUG2 : enters here hashval = 1 String added successfully Press 1 to add an element to the hashtable Press 2 to delete an element from the hashtable Press 3 to search the hashtable Press 4 to view the hashtable Press 5 to exit Please enter your choice: 1 Please enter the string :gaurava enters in add_string DEBUG purpose in hash function: str passed = gaurava Hashval returned in hash func= 7 hashval = 7 enters in lookup_string str in lookup_string = gaurava DEBUG purpose in hash function: str passed = gaurava Hashval returned in hash func= 7 hashval = 7 DEBUG ERROR :element not found in lookup string DEBUG Purpose NULL Inserting... DEBUG1 : enters here hashval = 7 String added successfully Press 1 to add an element to the hashtable Press 2 to delete an element from the hashtable Press 3 to search the hashtable Press 4 to view the hashtable Press 5 to exit Please enter your choice: 4 Index : i = 1 String = gaura ayu Index : i = 7 String = gaurava Press 1 to add an element to the hashtable Press 2 to delete an element from the hashtable Press 3 to search the hashtable Press 4 to view the hashtable Press 5 to exit Please enter your choice: 2 Please enter the string you want to delete :gaura String entered = gaura enters in delete_string DEBUG purpose in hash function: str passed = gaura Hashval returned in hash func= 0 hashval = 0 enters in lookup_string str in lookup_string = gaura DEBUG purpose in hash function: str passed = gaura Hashval returned in hash func= 0 hashval = 0 DEBUG ERROR :element not found in lookup string DEBUG Purpose Item not present. 
So, cannot be deleted Item found in list: Deletion failed Press 1 to add an element to the hashtable Press 2 to delete an element from the hashtable Press 3 to search the hashtable Press 4 to view the hashtable Press 5 to exit Please enter your choice: My routine is pasted below: include include struct list { char *string; struct list *next; }; struct hash_table { int size; /* the size of the table */ struct list *table; / the table elements */ }; struct hash_table * hashtable = NULL; struct hash_table *create_hash_table(int size) { struct hash_table *new_table; int i; if (size<1) return NULL; /* invalid size for table */ /* Attempt to allocate memory for the table structure */ if ((new_table = malloc(sizeof(struct hash_table))) == NULL) { return NULL; } /* Attempt to allocate memory for the table itself */ if ((new_table->table = malloc(sizeof(struct list *) * size)) == NULL) { return NULL; } /* Initialize the elements of the table */ for(i=0; i<size; i++) new_table->table[i] = '\0'; /* Set the table's size */ new_table->size = size; return new_table; } unsigned int hash(struct hash_table *hashtable, char *str) { printf("\n DEBUG purpose in hash function:\n"); printf("\n str passed = %s", str); unsigned int hashval = 0; int i = 0; for(; *str != '\0'; str++) { hashval += str[i]; i++; } hashval = hashval % 10; printf("\n Hashval returned in hash func= %d", hashval); return hashval; } struct list *lookup_string(struct hash_table *hashtable, char *str) { printf("\n enters in lookup_string \n"); printf("\n str in lookup_string = %s",str); struct list * new_list; unsigned int hashval = hash(hashtable, str); printf("\n hashval = %d \n", hashval); if(hashtable->table[hashval] == NULL) { printf("\n DEBUG ERROR :element not found in lookup string \n"); return NULL; } /* Go to the correct list based on the hash value and see if str is * in the list. If it is, return return a pointer to the list element. * If it isn't, the item isn't in the table, so return NULL. */ for(new_list = hashtable->table[hashval]; new_list != NULL;new_list = new_list->next) { if (strcmp(str, new_list->string) == 0) return new_list; } printf("\n returns NULL in lookup_string \n"); return NULL; } int add_string(struct hash_table *hashtable, char *str) { printf("\n enters in add_string \n"); struct list *new_list; struct list *current_list; unsigned int hashval = hash(hashtable, str); printf("\n hashval = %d", hashval); /* Attempt to allocate memory for list */ if ((new_list = malloc(sizeof(struct list))) == NULL) { printf("\n enters here \n"); return 1; } /* Does item already exist? */ current_list = lookup_string(hashtable, str); if (current_list == NULL) { printf("\n DEBUG Purpose \n"); printf("\n NULL \n"); } /* item already exists, don't insert it again. 
*/ if (current_list != NULL) { printf("\n Item already present...\n"); return 2; } /* Insert into list */ printf("\n Inserting...\n"); new_list->string = strdup(str); new_list->next = NULL; //new_list->next = hashtable->table[hashval]; if(hashtable->table[hashval] == NULL) { printf("\n DEBUG1 : enters here \n"); printf("\n hashval = %d", hashval); hashtable->table[hashval] = new_list; } else { printf("\n DEBUG2 : enters here \n"); printf("\n hashval = %d", hashval); struct list * temp_list = hashtable->table[hashval]; while(temp_list->next!=NULL) temp_list = temp_list->next; temp_list->next = new_list; // hashtable->table[hashval] = new_list; } return 0; } int delete_string(struct hash_table *hashtable, char *str) { printf("\n enters in delete_string \n"); struct list *new_list; struct list *current_list; unsigned int hashval = hash(hashtable, str); printf("\n hashval = %d", hashval); /* Does item already exist? */ current_list = lookup_string(hashtable, str); if (current_list == NULL) { printf("\n DEBUG Purpose \n"); printf("\n Item not present. So, cannot be deleted \n"); return 1; } /* item exists, delete it. */ if (current_list != NULL) { struct list * temp = hashtable->table[hashval]; if(strcmp(temp->string,str) == 0) { if(temp->next == NULL) { hashtable->table[hashval] = NULL; free(temp); } else { hashtable->table[hashval] = temp->next; free(temp); } } else { struct list * temp1; while(temp->next != NULL) { temp1 = temp; if(strcmp(temp->string, str) == 0) { break; } else { temp = temp->next; } } if(temp->next == NULL) { temp1->next = NULL; free(temp); } else { temp1->next = temp->next; free(temp); } } } return 0; } void free_table(struct hash_table *hashtable) { int i; struct list *new_list, *temp_list; if (hashtable==NULL) return; /* Free the memory for every item in the table, including the * strings themselves. 
*/ for(i=0; i<hashtable->size; i++) { new_list = hashtable->table[i]; while(new_list!=NULL) { temp_list = new_list; new_list = new_list->next; free(temp_list->string); free(temp_list); } } /* Free the table itself */ free(hashtable->table); free(hashtable); } void view_hashtable(struct hash_table * hashtable) { int i = 0; if(hashtable == NULL) return; for(i =0; i < hashtable->size; i++) { if((hashtable->table[i] == 0) || (strcmp(hashtable->table[i]->string, "*") == 0)) { continue; } printf(" Index : i = %d\t String = %s",i, hashtable->table[i]->string); struct list * temp = hashtable->table[i]->next; while(temp != NULL) { printf("\t %s",temp->string); temp = temp->next; } printf("\n"); } } int main() { hashtable = create_hash_table(10); if(hashtable == NULL) { printf("\n Memory allocation failure during creation of hash table \n"); return 0; } int flag = 1; while(flag) { int choice; printf("\n Press 1 to add an element to the hashtable\n"); printf("\n Press 2 to delete an element from the hashtable\n"); printf("\n Press 3 to search the hashtable\n"); printf("\n Press 4 to view the hashtable\n"); printf("\n Press 5 to exit \n"); printf("\n Please enter your choice: "); scanf("%d",&choice); if(choice == 5) flag = 0; else if(choice == 1) { char str[20]; printf("\n Please enter the string :"); scanf("%s",&str); int i; i = add_string(hashtable,str); if(i == 1) { printf("\n Memory allocation failure:Choice 1 \n"); return 0; } else if(i == 2) { printf("\n String already prsent in hash table : Couldnot add it again\n"); return 0; } else { printf("\n String added successfully \n"); } } else if(choice == 2) { int i; struct list * temp_list; char str[20]; printf("\n Please enter the string you want to delete :"); scanf("%s",&str); printf("\n String entered = %s", str); i = delete_string(hashtable,str); if(i == 0) { printf("\n Item found in list: Deletion success \n"); } else printf("\n Item found in list: Deletion failed \n"); } else if(choice == 3) { struct list * temp_list; char str[20]; printf("\n Please enter the string :"); scanf("%s",&str); temp_list = lookup_string(hashtable,str); if(!temp_list) { printf("\n Item not found in list: Deletion failed \n"); return 0; } printf("\n Item found: Present in Hash Table \n"); } else if(choice == 4) { view_hashtable(hashtable); } else if(choice == 5) { printf("\n Exiting ...."); return 0; } else printf("\n Invalid choice:"); }; free_table(hashtable); return 0; }

    Read the article

  • I want to show the result of my code on a web page because it is currently shown on the console

    - by lojayna
    package collabsoft.backlog_reports.c4; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.Statement; //import collabsoft.backlog_reports.c4.Report; public class Report { private Connection con; public Report(){ connectUsingJDBC(); } public static void main(String args[]){ Report dc = new Report(); dc.reviewMeeting(6, 8, 10); dc.createReport("dede",100); //dc.viewReport(100); // dc.custRent(3344,123,22,11-11-2009); } /** the following method is used to connect to the database **/ public void connectUsingJDBC() { // This is the name of the ODBC data source String dataSourceName = "Simple_DB"; try { // loading the driver in the memory Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // This is the connection URL String dbURL = "jdbc:odbc:" + dataSourceName; con = DriverManager.getConnection("jdbc:mysql://localhost:3306/Collabsoft","root",""); // This line is used to print the name of the driver and it would throw an exception if a problem occured System.out.println("User connected using driver: " + con.getMetaData().getDriverName()); //Addcustomer(con,1111,"aaa","aaa","aa","aam","111","2222","111"); //rentedMovies(con); //executePreparedStatement(con); //executeCallableStatement(con); //executeBatch(con); } catch (Exception e) { e.printStackTrace(); } } /** *this code is to link the SQL code with the java for the task *as an admin I should be able to create a report of a review meeting including notes, tasks and users *i will take the task id and user id and note id that will be needed to be added in the review meeting report and i will display the information related to these ida */ public void reviewMeeting(int taskID, int userID, int noteID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL report_review_meeting(?,?,?)}"); callableStatement.setInt(1,taskID); callableStatement.setInt(2,userID); callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } ////////////////////////////////// ///////////////////////////////// public void allproject(int projID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL all_project(?)}"); callableStatement.setInt(1,projID); //callableStatement.setInt(2,userID); //callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } /////////////////////////////// /** * here i take the event id and i take a string report and then * i relate the report with the event **/ public void createReport(String report,int E_ID )// law el proc bt return table { try{ Statement st = 
con.createStatement(); st.executeUpdate("UPDATE e_vent SET e_vent.report=report WHERE e_vent.E_ID= E_ID;"); /* CallableStatement callableStatement = con.prepareCall("{CALL Create_report(?,?)}"); callableStatement.setString(1,report); callableStatement.setInt(2,E_ID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); }*/ } catch(Exception e) { System.out.println("E"); System.out.println(e); } } /** in the following method i view the report of the event having the ID eventID */ public void viewReport(int eventID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL view_report(?)}"); callableStatement.setInt(1,eventID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } } // the result of these methods is being showed on the console , i am using WIcket and i want it 2 be showed on the web how is that done ?! thnxxx

    Read the article

  • Atomikos with Hibernate will exhaust db connections

    - by peter
    I am testing an application (Spring 2.5, Hibernate 3.5.0 Beta, Atomikos 3.6.2, and Postgreql 8.4.2) with the configuration for the DAO listed below. The problem that I see is that the pool of 10 connections with the dataSource gets exhausted after the 10's transaction. I know 'hibernate.connection.release_mode' has no effect unless the session is obtained with openSession rather then using a contextual session. I am wandering if anyone has found a way to instruct atomikos code to release connections after any transaction. Thank you Peter <bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean" init-method="init" destroy-method="close"> <property name="uniqueResourceName"><value>XADBMS</value></property> <property name="xaDataSourceClassName"> <value>org.postgresql.xa.PGXADataSource</value> </property> <property name="xaProperties"> <props> <prop key="databaseName">${jdbc.name}</prop> <prop key="serverName">${jdbc.server}</prop> <prop key="portNumber">${jdbc.port}</prop> <prop key="user">${jdbc.username}</prop> <prop key="password">${jdbc.password}</prop> </props> </property> <property name="poolSize"><value>10</value></property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="mappingResources"> <list> <value>Abc.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop> <prop key="hibernate.show_sql">on</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.connection.isolation">3</prop> <prop key="hibernate.current_session_context_class">jta</prop> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.release_mode">auto</prop> <prop key="hibernate.current_session_context_class">org.hibernate.context.JTASessionContext</prop> <prop key="hibernate.transaction.auto_close_session">true</prop> </props> </property> </bean> <!-- Transaction definition here --> <bean id="userTransactionService" class="com.atomikos.icatch.config.UserTransactionServiceImp" init-method="init" destroy-method="shutdownForce"> <constructor-arg> <props> <prop key="com.atomikos.icatch.service"> com.atomikos.icatch.standalone.UserTransactionServiceFactory </prop> </props> </constructor-arg> </bean> <!-- Construct Atomikos UserTransactionManager, needed to configure Spring --> <bean id="AtomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close" depends-on="userTransactionService"> <property name="forceShutdown" value="false" /> </bean> <!-- Also use Atomikos UserTransactionImp, needed to configure Spring --> <bean id="AtomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp" depends-on="userTransactionService"> <property name="transactionTimeout" value="300" /> </bean> <!-- Configure the Spring framework to use JTA transactions from Atomikos --> <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" depends-on="userTransactionService"> <property name="transactionManager" ref="AtomikosTransactionManager" /> <property name="userTransaction" ref="AtomikosUserTransaction" /> </bean> <!-- the transactional advice (what 'happens'; see the 
<aop:advisor/> bean below) --> <tx:advice id="txAdvice" transaction-manager="txManager"> <tx:attributes> <!-- all methods starting with 'get' are read-only --> <tx:method name="get*" read-only="true" propagation="REQUIRED"/> <!-- other methods use the default transaction settings (see below) --> <tx:method name="*" propagation="REQUIRED"/> </tx:attributes> </tx:advice> <aop:config> <aop:advisor pointcut="execution(* *.*.AbcDao.*(..))" advice-ref="txAdvice"/> </aop:config> <!-- DAO objects --> <bean id="abcDao" class="test.dao.impl.HibernateAbcDao" scope="singleton"> <property name="sessionFactory" ref="sessionFactory"/> </bean>

    Read the article

< Previous Page | 396 397 398 399 400 401 402 403 404 405 406 407  | Next Page >