Search Results

Search found 18401 results on 737 pages for 'oracle customer hub'.


  • The Enterprise is a Curmudgeon

    - by John K. Hines
    Working in an enterprise environment is a unique challenge. There's a lot more to software development than developing software. A project lead or Scrum Master has to manage personalities and intra-team politics, has to manage accomplishing the task at hand while creating the opportunities and a reputation for handling desirable future work, has to create a competent, happy team that actually delivers while being careful not to burn bridges or hurt feelings outside the team. Which makes me feel surprised to read advice like: "The enterprise should figure out what is likely to work best for itself and try to use it." - Ken Schwaber, The Enterprise and Scrum.

    The enterprises I have experience with are fundamentally unable to be self-reflective. It's like asking a Roman gladiator if he'd like to carve out a little space in the arena for some silent meditation. I'm currently wondering how compatible Scrum is with the top-down hierarchy of life in a large organization. Specifically, manufacturing-mindset, fixed-release, harmony-valuing large organizations. Now I understand why Agile can be a better fit for companies without much organizational inertia.

    Recently I've talked with nearly two dozen software professionals and their managers about Scrum and Agile. I've become convinced that a developer, team, organization, or enterprise can be Agile without using Scrum. But I'm not sure about what process would be the best fit, in general, for an enterprise that wants to become Agile. It's possible I should read more than just the introduction to Ken's book.

    I do feel prepared to answer some of the questions I had asked in a previous post:

    How can Agile practices (including but not limited to Scrum) be adopted in situations where the highest-placed managers in a company demand software within extremely aggressive deadlines?

    Answer: In a very limited capacity at the individual level. The situation here is that the senior management of this company values any software release more than it values developer well-being, end-user experience, or software quality. Only if the developing organization is given an immediate refactoring opportunity does this sort of development make sense to a person who values sustainable software.

    How can Agile practices be adopted by teams that do not perform a continuous cycle of new development, such as those whose sole purpose is to reproduce and debug customer issues?

    Answer: It depends. For Scrum in particular, I don't believe Scrum is meant to manage unpredictable work. While you can easily adopt XP practices for bug fixing, the project-management aspects of Scrum require some predictability. My question here was meant toward those who want to apply Scrum to non-development teams. In some cases it works, in others it does not.

    How can a team measure if its development efforts are both Agile and employ sound engineering practices?

    Answer: I'm currently leaning toward measuring these independently. The Agile Principles are a terrific way to measure if a software team is agile. Sound engineering practices are those practices which help developers meet the principles. I think Scrum is being mistakenly applied as an engineering practice when it is essentially a project management practice. In my opinion, XP and Lean are examples of good engineering practices.

    How can Agile be explained in an accurate way that describes its benefits to sceptical developers and/or revenue-focused non-developers?

    Answer: Agile techniques will result in higher-quality, lower-cost software development. This comes primarily from finding defects earlier in the development cycle. If there are individual developers who do not want to collaborate, write unit tests, or refactor, then these are simply developers who are either working in an area where adding these techniques will not add value (i.e. they are an expert) or they are a developer who is satisfied with the status quo. In the first case they should be left alone. In the second case, the results of Agile should be demonstrated by other developers who are willing to receive recognition for their efforts.

    It all comes down to individuals, doesn't it? If you're working in an organization whose Agile adoption consists exclusively of Scrum, consider ways to form individual Agile teams to demonstrate its benefits. These can even be virtual teams that span people across org-chart boundaries. Once you can measure real value, whether it's Scrum, Lean, or something else, people will follow. Even the curmudgeons.

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me!

    Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed.

    What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have much more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the junk email would stop, but that hasn't happened yet any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their car... but that's another whole topic and I digress.

    So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money?

    Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter huh... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found. I installed it, and had to do a couple things:

    1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter Client's settings for Outlook, I told it to put all spam there.

    2) It didn't seem to be doing anything. There's a ribbon button that you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.

    3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering.

    4) Day 2 seemed to go about like Day 1... but I hung in there.

    5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I now can sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes this is simply another way of using the delete key, but you know what? ... it feels good :)

    Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked.

    Stay in the 'Light!

    Read the article

  • Synergy - easy share of keyboard and mouse between multiple computers

    Did you ever have the urge to share one set of keyboard and mouse between multiple machines? If so, please read on...

    Using multiple machines

    Honestly, as a software craftsman it is my daily business to run multiple machines - either physical or virtual - to be able to solve my customers' requirements. Recent hardware equipment allows this very easily. For laptops it's a no-brainer to attach a second or even a third screen in order to extend your native display. This works quite handily, and in my case I used to attach two additional screens - one via HD15 connector, the other via HDMI. But... as it's a laptop and therefore a mobile unit, there are slight restrictions. Detaching and re-attaching all cables when changing locations is one of them, but hardware limitations, too. After all, it's a laptop and not a workstation.

    I guess that anyone working in IT (or ICT) has more than one machine at their workplace or their home office, and at least I find it quite annoying to have multiple sets of keyboard and mouse conquering my remaining space on my desk. Despite the ugly looks of all those cables and whatsoever 'chaos of distraction', I prefer a cleaner solution and working environment. This allows me to actually focus on my work and tasks to do rather than to worry about choosing the right combination of keyboard/mouse. My current workplace is a patchwork of various pieces of hardware (approx. 2-3 years old):

    - DIY desktop on Ubuntu 12.04 64-bit, Core2 Duo (E7400, 2.8GHz), 4GB RAM, 2x 250GB HDD, nVidia GPU 512MB
    - Dell Inspiron 1525 on Windows 8 64-bit, 4GB RAM, 200GB HDD
    - HP Compaq 6720s on Windows Vista 32-bit, Core2 Duo (T5670, 1.8GHz), 2GB RAM, 160GB HDD
    - Mac mini on Mac OS X 10.7, Core i5 (2.3 GHz), 2GB RAM, 500GB HDD

    I know... not the latest and greatest, but a decent combination to work with. New system(s) is/are already on the shopping list, but I live in the 'wrong' country to buy computer hardware. So, the next trip abroad will provide me with some new stuff.

    Using multiple operating systems

    The list of hardware above already names different operating systems, and actually I have only one preference: Linux. But still my job as a software craftsman for Visual FoxPro and .NET development requires other OSes, too. Not a big deal, it's just like this. Additionally to those physical machines, there are a bunch of virtual machines around, most of them running either Windows XP or Windows 7. For years I have had the practice that each development for one customer is isolated into its own virtual machine and environment. This keeps it clean and version-safe. But as you can easily imagine, with that setup there are a couple of constraints referring to keyboard and mouse. Usually, those systems require their own pieces of hardware attached. As stated, I don't like clutter on my desk's surface, so a cross-platform solution has to come in here. In the past, I tried it with various applications, hardware or network protocols like X11, RDP, NX, TeamViewer, RAdmin, KVM switch, etc., but the problem in this case is that they either allow you to remotely connect to the other system or exclusively 'bind' your peripherals to the active system. Not optimal after all.

    Synergy to the rescue

    Quote from their website: "Synergy lets you easily share your mouse and keyboard between multiple computers on your desk, and it's Free and Open Source. Just move your mouse off the edge of one computer's screen on to another. You can even share all of your clipboards. All you need is a network connection. Synergy is cross-platform (works on Windows, Mac OS X and Linux)."

    Yep, that's it! All I need for my setup here... Actually, I couldn't believe it myself that I didn't stumble over Synergy earlier, but 'Get over it' and there we go. And not only is it Open Source, it's also free. Donations for the developers are very welcome, and recently they introduced Synergy Premium: a possibility to buy so-called premium votes that can be used to put more weight / importance on specific issues or bugs that you would like the developers to look into.

    Installation and configuration

    Simply download the installation packages for your systems of choice, run the installer and enter some minor information about your network setup. I chose my desktop machine for the role of the Synergy server and configured my screen setup as follows: The screen setup currently allows you to build or connect up to 15 machines. The number of screens can be higher, as those machines might have multiple screens physically attached. Synergy takes this into account in the overall calculation and simply works as expected. I tried it for fun with a second monitor each connected to both laptops to have a total number of 6 active screens. No flaws after all - stunning! All the other machines are configured as clients like so: Side note: The screenshot was taken on Windows 8 and pasted via clipboard into Gimp running on Ubuntu.

    Resume

    Synergy is now definitely in my box of tools for my daily work, and amongst the first pieces of software I install after the operating system. It just simplifies my life and cleans my desk. Never again without Synergy! Now, only waiting for an Android version to integrate my Galaxy Tab 10.1, too. ;-) Please, check out that superb product and enjoy sharing one keyboard, one mouse and one clipboard between your various machines and operating systems.

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to login to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below.

    A few things were interesting to me:

    The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127 in case you can’t be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches.

    This line: “Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification.” I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we’re going to hear much much more about AAD in the future; isn’t it about time we could log on to SQL Azure (now Windows Azure SQL Database) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn’t it about time those things were combined?

    As I said, just some idle thoughts. Below is the transcript of the email if you are interested.

    @Jamiet  This is regarding the support request <redacted> where in you were not able to login into the windows azure management portal with live id. We are providing you with the summary, root cause analysis and information about permanent fix:

    Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM

    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login. This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration.

    Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign-in the Management Portal after ~4:00 p.m. PST on August 24th, 2012. We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total.

    Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domains. This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD.

    Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains. The remaining affected domains fell into two categories:

    1. Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident within 2 business days after the AAD team received an escalation from product support regarding those accounts.

    2. Windows Live Custom Domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365, which is a service that also uses AAD. In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription.

    Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • Access Control Service V2 and Facebook Integration

    - by Your DisplayName here!
    I haven’t been blogging about ACS2 in the past because it was not released and I was kinda busy with other stuff. Needless to say I spent quite some time with ACS2 already (both in customer situations as well as in the classroom and at conferences). ACS2 rocks! It’s IMHO the most interesting and useful (and most unique) part of the whole Azure offering! For my talk at VSLive yesterday, I played a little with the Facebook integration. See Steve’s post on the general setup. One claim that you get back from Facebook is an access token. This token can be used to directly talk to Facebook and query additional properties about the user. Which properties you have access to depends on which authorization your Facebook app requests. You can specify this in the identity provider registration page for Facebook in ACS2. In my example I added access to the home town property of the user. Once you have the access token from ACS you can use e.g. the Facebook SDK from Codeplex (also available via NuGet) to talk to the Facebook API. In my sample I used the WIF ClaimsAuthenticationManager to add the additional home town claim. This is not necessarily how you would do it in a “real” app. Depends ;) The code looks like this (sample code!):

        public class ClaimsTransformer : ClaimsAuthenticationManager
        {
            public override IClaimsPrincipal Authenticate(
                string resourceName, IClaimsPrincipal incomingPrincipal)
            {
                if (!incomingPrincipal.Identity.IsAuthenticated)
                {
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                string accessToken;
                if (incomingPrincipal.TryGetClaimValue(
                    "http://www.facebook.com/claims/AccessToken", out accessToken))
                {
                    try
                    {
                        var home = GetFacebookHometown(accessToken);
                        if (!string.IsNullOrWhiteSpace(home))
                        {
                            incomingPrincipal.Identities[0].Claims.Add(
                                new Claim("http://www.facebook.com/claims/HomeTown", home));
                        }
                    }
                    catch { }
                }

                return incomingPrincipal;
            }

            private string GetFacebookHometown(string token)
            {
                var client = new FacebookClient(token);

                dynamic parameters = new ExpandoObject();
                parameters.fields = "hometown";

                dynamic result = client.Get("me", parameters);
                return result.hometown.name;
            }
        }

    Read the article

  • quartz throws deadlocks under load

    - by Khandelwal
    We are using Quartz with Spring and our configuration is throwing deadlocks when quartz has more than 1 thread configured. I'm starting to believe that it's because we don't have our quartz configured correctly with Spring, but I can't find enough documentation on how to configure the two to play nicely. We are running on both Windows and Linux environments - pointing at MSSQL and Oracle DBs. With both OS, using either DB, we can throw the following deadlock errors... We're consistently throwing these deadlock errors. We run under heavy load, inserting hundreds of quartz triggers in a matter of minutes.

        2010-03-17 18:52:31,737 [] [] ERROR [DFScheduler_Worker-42] core.ErrorLogger core.ErrorLogger (QuartzScheduler.java:2185) - An error occured while marking executed job complete. job= 'BPM.6e41a6567f0000020100362a51dc7a50'
        org.quartz.JobPersistenceException: Couldn't remove trigger: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [See nested exception: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.]
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.removeTrigger(JobStoreSupport.java:1469)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggeredJobComplete(JobStoreSupport.java:2978)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport$39.execute(JobStoreSupport.java:2962)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport$41.execute(JobStoreSupport.java:3713)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3747)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3709)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggeredJobComplete(JobStoreSupport.java:2958)
            at org.quartz.core.QuartzScheduler.notifyJobStoreJobComplete(QuartzScheduler.java:1727)
            at org.quartz.core.JobRunShell.run(JobRunShell.java:273)
            at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
        Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
            at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source)
            at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(Unknown Source)
            at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(Unknown Source)
            at ...
            at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.deleteSimpleTrigger(StdJDBCDelegate.java:1820)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.deleteTriggerAndChildren(JobStoreSupport.java:1345)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.removeTrigger(JobStoreSupport.java:1453)
            ... 9 more
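
    For reference, a minimal sketch of a plain Quartz JDBC job-store configuration built in code rather than through Spring XML. The property names assume the Quartz 1.8/2.x JobStoreTX; the data-source values are placeholders, and acquireTriggersWithinLock is the setting most often suggested for deadlocks between trigger acquisition and job completion under heavy insert load - whether it resolves this particular setup is an assumption, not a given.

        import java.util.Properties;

        import org.quartz.Scheduler;
        import org.quartz.impl.StdSchedulerFactory;

        public class QuartzJobStoreConfigSketch {

            public static Scheduler buildScheduler() throws Exception {
                Properties p = new Properties();
                p.setProperty("org.quartz.scheduler.instanceName", "DFScheduler");
                p.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
                p.setProperty("org.quartz.threadPool.threadCount", "10");

                // JDBC job store; swap in oracle.OracleDelegate when pointing at Oracle.
                p.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
                p.setProperty("org.quartz.jobStore.driverDelegateClass",
                        "org.quartz.impl.jdbcjobstore.MSSQLDelegate");
                p.setProperty("org.quartz.jobStore.dataSource", "quartzDS");

                // Serializes trigger acquisition against the same lock row used when
                // jobs complete, the usual recommendation for this class of deadlock.
                p.setProperty("org.quartz.jobStore.acquireTriggersWithinLock", "true");

                // Placeholder connection details (hypothetical values).
                p.setProperty("org.quartz.dataSource.quartzDS.driver",
                        "com.microsoft.sqlserver.jdbc.SQLServerDriver");
                p.setProperty("org.quartz.dataSource.quartzDS.URL",
                        "jdbc:sqlserver://localhost:1433;databaseName=quartz");
                p.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
                p.setProperty("org.quartz.dataSource.quartzDS.password", "quartz");
                p.setProperty("org.quartz.dataSource.quartzDS.maxConnections", "12");

                return new StdSchedulerFactory(p).getScheduler();
            }
        }

    When the scheduler is created through Spring's SchedulerFactoryBean instead, the same keys can usually be supplied via its quartzProperties property.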

    Read the article

  • Intellij Javafx artifact - how do you make it?

    - by Louis Sayers
    I've been trying all day to turn my javafx application into a jar file. I'm using Java 1.7 update 7. Oracle has some information, but it just seems scattered all over the place. Intellij is almost doing the job, but I get the following error: java.lang.NoClassDefFoundError: javafx/application/Application Which seems to say that I need to tell java where the jfxrt.jar is... If I add this jar to my classpath info for the manifest build in intellij - ctrl+shift+alt+s - Artifacts - Output Layout tab - Class Path, then I get another error: Could not find or load main class com.downloadpinterest.code.Main It seems strange to have to include jfxrt.jar in my class path though... I've tried to create an ant script as well, but I feel like IntelliJ is 90% of the way there - and I just need a bit of help to figure out why I need to include jfxrt.jar, and why my Main class isn't being found (I'm guessing I need to add it to the classpath somehow?). Could anyone clue me up as to what is happening? I had a basic gui before which worked fine, but JavaFX seems to be making life complicated!

    Read the article

  • NHibernate Generators

    - by Dan
    What is the best tool for generating entity classes and/or hbm files and/or SQL scripts for NHibernate? The list below is from http://www.hibernate.org/365.html - which is the best, and why?

    - Moregen - Free, Open Source (GPL) O/R generator that can merge into existing Visual Studio projects. Also merges changes to generated classes.
    - NConstruct Lite - Free tool for generating NHibernate O/R mapping source code. Supports different databases (Microsoft SQL Server, Oracle, Access).
    - GENNIT NHibernate Code Generator - Free/Commercial Web 2.0 code generation of NHibernate code using a WYSIWYG online UML designer.
    - GenWise Studio with NHibernate Template - Commercial product; imports your existing database and generates all XML and classes, including factories. It can also generate an ASP.NET web application for your NHibernate BO layer automatically.
    - HQL Analyzer and hbm.xml GUI Editor
    - ObjectMapper by Mats Helander - a mapping GUI with NHibernate support.
    - MyGeneration - a template-based code generator GUI. Its template library includes templates for generating mapping files and classes from a database.
    - AndroMDA - an open-source code generation framework that uses Model Driven Architecture (MDA) to transform UML models into deployable components. It supports generation of data access layers that use NHibernate as their persistence framework.
    - CodeSmith Template for NH
    - NHibernate Helper Kit - a VS2005 add-in to generate classes and mapping files.
    - NConstruct - Intelligent Software Factory - Commercial product; full .NET C# source code generation for all tiers of the information system through a simple wizard procedure. O/R mapping based on NHibernate. For both WinForms and ASP.NET 2.0.

    Read the article

  • Implementation review for a MVC.NET app with custom membership

    - by mrjoltcola
    I'd like to hear if anyone sees any problems with how I implemented the security in this Oracle-based MVC.NET app, either security issues, concurrency issues or scalability issues.

    First, I implemented a CustomOracleMembershipProvider to handle the database interface to the membership store. I implemented a custom Principal named User which implements IPrincipal, and it has a hashtable of Roles. I also created a separate class named AuthCache which has a simple cache for User objects. Its purpose is simply to avoid return trips to the database, while decoupling the caching from either the web layer or the data layer. (So I can share the cache between MVC.NET, WCF, etc.) The MVC.NET stock MembershipService uses the CustomOracleMembershipProvider (configured in web.config), and both MembershipService and FormsService share access to the singleton AuthCache.

    My AccountController.LogOn() method:

    1) Validates the user via the MembershipService.Validate() method, also loads the roles into the User.Roles container and then caches the User in AuthCache.

    2) Signs the user into the Web context via FormsService.SignIn(), which accesses the AuthCache (not the database) to get the User and sets HttpContext.Current.User to the cached User Principal.

    In global.asax.cs, Application_AuthenticateRequest() is implemented. It decrypts the FormsAuthenticationTicket, accesses the AuthCache by the ticket.Name (Username) and sets the Principal by setting Context.User = user from the AuthCache.

    So in short, all these classes share the AuthCache, and I have, for thread synchronization, a lock() in the cache store method. No lock in the read method. The custom membership provider doesn't know about the cache, the MembershipService doesn't know about any HttpContext (so it could be used outside of a web app), and the FormsService doesn't use any custom methods besides accessing the AuthCache to set the Context.User for the initial login, so it isn't dependent on a specific membership provider.

    The main thing I see now is that the AuthCache will be sharing a User object if a user logs in from multiple sessions. So I may have to change the key from just UserId to something else (maybe using something in the FormsAuthenticationTicket for the key?).

    Read the article

  • Can xjc -version be trusted?

    - by JasonPlutext
    I've spent the day debugging an issue with JAXB getting namespaces wrong or missing (possibly related to Marshaller.JAXB_FRAGMENT, but that's not the point here). I found the problem occurs with JAXB RI 2.1.10 in my endorsed dir. It is fixed if I use JAXB RI 2.2.4 or 2.2.6.

    Here is what is really confusing (and what made it take so long). The problem occurs on Linux with:

        $ java -version
        java version "1.7.0_03"
        OpenJDK Runtime Environment (IcedTea7 2.1.1pre) (7~u3-2.1.1~pre1-1ubuntu2)
        OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)

        $ xjc -version
        xjc 2.2.4

    but it should work fine, if this java really uses JAXB RI 2.2.4!! Similarly, I can't reproduce the issue on Windows with Java 1.6.0_27, which reports:

        C:\Program Files\Java\jdk1.6.0_27\bin>java -version
        java version "1.6.0_27"
        Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)

        C:\Program Files\Java\jdk1.6.0_27\bin>xjc -version
        xjc version "JAXB 2.1.10 in JDK 6"
        JavaTM Architecture for XML Binding(JAXB) Reference Implementation, (build JAXB 2.1.10 in JDK 6)

    and yet if I put the 2.1.10 RI in my endorsed dir, the problem occurs. It should occur with 1.6.0_27, if that really uses JAXB RI 2.1.10.

    It seems to me that the problem I'm experiencing has been fixed in the reference implementation somewhere after 2.1.10 and before 2.2.4, but that neither of the 2 VMs above actually uses the JAXB version it claims to. Or possibly they use the xjc they claim, but not what is in jaxb-api.jar and jaxb-impl.jar (I know there is a difference in the namespace prefix mapper property, but that won't be causing this problem).

    I've done these experiments on Win 7 and Ubuntu, in Tomcat (no Eclipse), and in Eclipse (no Tomcat), so I'm pretty confident I'm explaining my findings correctly. Can anyone provide any insight into what is happening? If I'm right, does anyone know what versions of JAXB the various Sun/Oracle JDKs really use?
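
    One way to take the guesswork out of which JAXB runtime is actually picked up (endorsed dir, a jaxb-impl.jar on the classpath, or the copy baked into the JDK) is to ask the loaded JAXBContext implementation where it came from. A small sketch, where "com.example.generated" is a placeholder for any package of JAXB-bound classes with an ObjectFactory:

        import javax.xml.bind.JAXBContext;

        public class WhichJaxbRuntime {
            public static void main(String[] args) throws Exception {
                // "com.example.generated" is a placeholder for a real context path.
                JAXBContext ctx = JAXBContext.newInstance("com.example.generated");
                Class<?> impl = ctx.getClass();

                System.out.println("JAXBContext impl class : " + impl.getName());
                // getCodeSource() is null for classes loaded from rt.jar, which is
                // itself a useful signal: no endorsed/standalone RI was picked up.
                System.out.println("Loaded from            : "
                        + impl.getProtectionDomain().getCodeSource());
                System.out.println("Implementation version : "
                        + impl.getPackage().getImplementationVersion());
            }
        }

    Running this once inside Tomcat and once from a plain main() makes it easy to see whether the endorsed directory is honoured in both environments.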

    Read the article

  • Using hibernate criteria, is there a way to escape special characters?

    - by Kevin Crowell
    For this question, we want to avoid having to write a special query, since the query would have to be different across multiple databases. Using only Hibernate criteria, we want to be able to escape special characters. This situation is the reason for needing the ability to escape special characters:

    Assume that we have table 'foo' in the database. Table 'foo' contains only 1 field, called 'name'. The 'name' field can contain characters that may be considered special in a database. Two examples of such a name are 'name_1' and 'name%1'. Both the '_' and '%' are special characters, at least in Oracle. If a user wants to search for one of these examples after they are entered in the database, problems may occur.

        criterion = Restrictions.ilike("name", searchValue, MatchMode.ANYWHERE);
        return findByCriteria(null, criterion);

    In this code, 'searchValue' is the value that the user has given the application to use for its search. If the user wants to search for '%', the user is going to be returned with every 'foo' entry in the database. This is because the '%' character represents the "any number of characters" wildcard for string matching, and the SQL code that Hibernate produces will look like:

        select * from foo where name like '%'

    Is there a way to tell Hibernate to escape certain characters, or to create a workaround that is not database-type specific?
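
    One approach that keeps the restriction inside the Criteria API is to escape the wildcards in the user's input and emit a standard ESCAPE clause. The sketch below assumes the protected LikeExpression constructor that takes an escape character (present in Hibernate 3.2+); the '!' escape character and the class name are arbitrary choices for illustration, not anything from the original code:

        import org.hibernate.criterion.Criterion;
        import org.hibernate.criterion.LikeExpression;
        import org.hibernate.criterion.MatchMode;

        public class EscapedLike extends LikeExpression {

            private EscapedLike(String propertyName, String value, MatchMode matchMode) {
                // The escape-character constructor appends "escape '!'" to the
                // generated SQL, so the escaping is not database specific.
                super(propertyName, escape(value), matchMode, '!', true /* ignoreCase */);
            }

            public static Criterion ilike(String propertyName, String value, MatchMode matchMode) {
                return new EscapedLike(propertyName, value, matchMode);
            }

            // Escape the escape character itself first, then the LIKE wildcards.
            private static String escape(String input) {
                return input.replace("!", "!!")
                            .replace("%", "!%")
                            .replace("_", "!_");
            }
        }

    Usage then mirrors the original call: criterion = EscapedLike.ilike("name", searchValue, MatchMode.ANYWHERE);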

    Read the article

  • Improving Performance of Crystal Reports using Stored Procedures

    - by mjh41
    Recently I updated a Crystal Report that was doing all of its work on the client-side (Selects, formulas, etc) and changed all of the logic to be done on the server-side through Stored Procedures using an Oracle 11g database. Now the report is only being used to display the output of the stored procedures and nothing else. Everything I have read on this subject says that utilizing stored procedures should greatly reduce the running time of the report, but it still takes roughly the same amount of time to retrieve the data from the server. Is there something wrong with the stored procedure I have written, or is the issue in the Crystal Report itself? Here is the stored procedure code along with the package that defines the necessary REF CURSOR. CREATE OR REPLACE PROCEDURE SP90_INVENTORYDATA_ALL ( invdata_cur IN OUT sftnecm.inv_data_all_pkg.inv_data_all_type, dCurrentEndDate IN vw_METADATA.CASEENTRCVDDATE%type, dCurrentStartDate IN vw_METADATA.CASEENTRCVDDATE%type ) AS BEGIN OPEN invdata_cur FOR SELECT vw_METADATA.CREATIONTIME, vw_METADATA.RESRESOLUTIONDATE, vw_METADATA.CASEENTRCVDDATE, vw_METADATA.CASESTATUS, vw_METADATA.CASENUMBER, (CASE WHEN vw_METADATA.CASEENTRCVDDATE < dCurrentStartDate AND ( (vw_METADATA.CASESTATUS is null OR vw_METADATA.CASESTATUS != 'Closed') OR TO_DATE(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') >= dCurrentStartDate) then 1 else 0 end) InventoryBegin, (CASE WHEN (to_date(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') BETWEEN dCurrentStartDate AND dCurrentEndDate) AND vw_METADATA.RESRESOLUTIONDATE is not null AND vw_METADATA.CASESTATUS is not null then 1 else 0 end) CaseClosed, (CASE WHEN vw_METADATA.CASEENTRCVDDATE BETWEEN dCurrentStartDate AND dCurrentEndDate then 1 else 0 end) CaseCreated FROM vw_METADATA WHERE vw_METADATA.CASEENTRCVDDATE <= dCurrentEndDate ORDER BY vw_METADATA.CREATIONTIME, vw_METADATA.CASESTATUS; END SP90_INVENTORYDATA_ALL; And the package: CREATE OR REPLACE PACKAGE inv_data_all_pkg AS TYPE inv_data_all_type IS REF CURSOR RETURN inv_data_all_temp%ROWTYPE; END inv_data_all_pkg;

    Read the article

  • Web service request ignores basic WSDL XML element restrictions

    - by Oliver
    Hi all, I have encountered some difficulties while trying to validate an incoming soap message to a service I have running on JBoss AS (v. 5.1.0). In my code, I have explicitly set some fields to be required, eg: public class MyClass { @XmlElement(required=true, nillable=false) private List<myOtherObjects> myList; } This requirement is also reflected in the WSDL (note the lack of minOccurs="0"): <xs:element maxOccurs="unbounded" name="myList" type="tns:myOtherObjects" /> However, when I do a test soap message that has myList set to empty or null, these restrictions are completely ignored, forcing me to validate manually within the application logic on the service. I did some searching on the Internet and found out that, on WebLogic, the validation does not seem to be enabled by default, though it can be turned on by modifying the weblogic-webservices.xml file. (http://forums.oracle.com/forums/thread.jspa?threadID=783972&tstart=115) I’m wondering if there is something similar that I have to do with JBoss AS to enable automatic validation before the soap message reaches the service. Any help would be greatly appreciated. Oliver
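
    Container-level schema validation is switched on differently on each JAX-WS stack (JBossWS Native, CXF, Metro), so while a JBoss-specific toggle is being tracked down, one portable fallback is to let JAXB itself enforce the schema rather than hand-writing checks in the service logic. A minimal sketch, assuming MyClass is the bound class from the question and that its schema is available on the classpath as /MyClass.xsd (a hypothetical location):

        import java.net.URL;

        import javax.xml.XMLConstants;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Unmarshaller;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;

        public class SchemaValidatingUnmarshal {

            public static MyClass unmarshal(StreamSource payload) throws Exception {
                // /MyClass.xsd is a placeholder for the schema backing the endpoint.
                URL xsd = SchemaValidatingUnmarshal.class.getResource("/MyClass.xsd");
                Schema schema = SchemaFactory
                        .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                        .newSchema(xsd);

                JAXBContext ctx = JAXBContext.newInstance(MyClass.class);
                Unmarshaller um = ctx.createUnmarshaller();
                // With a schema attached, a missing or empty required element
                // surfaces as an UnmarshalException instead of being ignored.
                um.setSchema(schema);

                return (MyClass) um.unmarshal(payload);
            }
        }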

    Read the article

  • Group testNG tests without annotations

    - by diy
    Hi gents and ladies, I'm responsible for enabling unit tests for one of our ETL components. I want to accomplish this using TestNG with a generic Java test class and a number of test definitions in testng.xml passing various parameters to the class. The Oracle and ETL guys should be able to add new tests without changing the Java code, so we need to use the XML suite file instead of annotations.

    Question: Is there a way to group tests in testng.xml (similarly to how it is done with annotations)? I mean something like:

        <group name="first_group">
          <test>
            <class ...>
            <parameter ...>
          </test>
        </group>
        <group name="second_group">
          <test>
            <class ...>
            <parameter ...>
          </test>
        </group>

    I've checked the testng.dtd and figured out that similar syntax is not allowed. But is there a workaround to allow grouping? Thanks in advance
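
    For what it's worth, testng.dtd has no <group> wrapper at the suite level, but each <test> element already acts as a named grouping of classes and parameters, and a parent suite can pull in child suite files with <suite-files> if a higher-level split is needed. The same structure can also be assembled programmatically, which may suit a setup where non-developers maintain the definitions elsewhere. A sketch, with class names and parameters invented for illustration:

        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.Map;

        import org.testng.TestNG;
        import org.testng.xml.XmlClass;
        import org.testng.xml.XmlSuite;
        import org.testng.xml.XmlTest;

        public class EtlSuiteBuilder {

            public static void main(String[] args) {
                XmlSuite suite = new XmlSuite();
                suite.setName("etl-regression");

                // Each XmlTest plays the role of the hoped-for <group> element.
                addTest(suite, "first_group", "com.example.etl.GenericEtlTest", "scenario", "load_customers");
                addTest(suite, "second_group", "com.example.etl.GenericEtlTest", "scenario", "load_orders");

                TestNG testng = new TestNG();
                testng.setXmlSuites(Arrays.asList(suite));
                testng.run();
            }

            private static void addTest(XmlSuite suite, String name, String testClass,
                                        String paramName, String paramValue) {
                XmlTest test = new XmlTest(suite);   // registers itself with the suite
                test.setName(name);
                Map<String, String> params = new HashMap<String, String>();
                params.put(paramName, paramValue);
                test.setParameters(params);
                test.setXmlClasses(Arrays.asList(new XmlClass(testClass)));
            }
        }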

    Read the article

  • Accessing Item Fields via Sitecore Web Service

    - by Catch
    I am creating items on the fly via the Sitecore web service. So far I can create the items from this function: AddFromTemplate. And I also tried this link: http://blog.hansmelis.be/2012/05/29/sitecore-web-service-pitfalls/

    But I am finding it hard to access the fields. So far here is my code:

        public void CreateItemInSitecore(string getDayGuid, Oracle.DataAccess.Client.OracleDataReader reader)
        {
            if (getDayGuid != null)
            {
                var sitecoreService = new EverBankCMS.VisualSitecoreService();
                var addItem = sitecoreService.AddFromTemplate(getDayGuid, templateIdRTT, "Testing", database, myCred);
                var getChildren = sitecoreService.GetChildren(getDayGuid, database, myCred);

                for (int i = 0; i < getChildren.ChildNodes.Count; i++)
                {
                    if (getChildren.ChildNodes[i].InnerText.ToString() == "Testing")
                    {
                        var getItem = sitecoreService.GetItemFields(getChildren.ChildNodes[i].Attributes[0].Value, "en", "1", true, database, myCred);
                        string p = getChildren.ChildNodes[i].Attributes[0].Value;
                    }
                }
            }
        }

    So as you can see, I am creating an item and I want to access the fields for that item. I thought that GetItemFields would give me some value, but I'm finding it hard to get it. Any clue?

    Read the article

  • Trying to use a authlogic-connect as a plugin in place of gem - Server doesn't start

    - by Arkid
    I am trying to use Authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile as:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    Now when I run the bundle install, it runs fine. When I try to start the server I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped Gem renamed as authlogic-connect in the localgems folder. What is the problem? Here is what I get on using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]          # Don't create a Gemfile
          -d, [--database=DATABASE]     # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                        # Default: sqlite3
          -O, [--skip-active-record]    # Skip Active Record files
              [--dev]                   # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]        # Skip Prototype files
          -T, [--skip-test-unit]        # Skip Test::Unit files
          -G, [--skip-git]              # Skip Git ignores and keeps
          -r, [--ruby=PATH]             # Path to the Ruby binary of your choice
                                        # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]     # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]       # Path to an application builder (can be a filesystem path or URL)
              [--edge]                  # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.

    Read the article

  • How to register assemblies using Windsor in ASP.NET MVC

    - by oz
    This is how my project looks: TestMvc (my web project) has a reference to the DomainModel.Core assembly where my interfaces and business objects reside. The class that implements the interfaces in DomainModel.Core is in a different assembly called DomainModel.SqlRepository; the reason behind it is that if I just want to create a repository for Oracle I just have to deploy the new dll, change the web.config and be done with it. When I build the solution, if I look at the \bin folder of my TestMvc project, there is no reference to the DomainModel.SqlRepository, which makes sense because it's not being reference anywhere. Problem arises when my windsor controller factory tries to resolve that assembly, since it's not on the \bin directory. So is there a way to point windsor to a specific location, without adding a reference to that assembly? My web.config looks like this: <component id="UserService" service="TestMvc.DomainModel.Core.Interface, TestMvc.DomainModel.Core" type="TestMvc.DomainModel.SqlRepository.Class, TestMvc.DomainModel.SqlRepository" lifestyle="PerWebRequest" /> There's many ways around this, like copying the dll as part of the build, add the reference to the project so it will get copied to the \bin folder or install it on the GAC and add an assembly reference in the web.config. I guess my question is specific to Windsor, to see if I can give the location of my assembly and it will resolve it.

    Read the article

  • Windows Service Conundrum

    - by Paul Johnson
    All, I have a custom object which I have written using VB.NET (.NET 2.0). The object instantiates its own Threading.Timer object and carries out a number of background processes, including periodic interrogation of an Oracle database and delivery of emails via SMTP according to data detected in the database. The following is the code implemented in the Windows service class:

        Public Class IncidentManagerService

            'Fakes
            Private _fakeRepoFactory As IRepoFactory
            Private _incidentRepo As FakeIncidentRepo
            Private _incidentDefinitionRepo As FakeIncidentDefinitionRepo
            Private _incManager As IncidentManager.Session

            'Real
            Private _started As Boolean = False
            Private _repoFactory As New NHibernateRepoFactory
            Private _psalertsEventRepo As IPsalertsEventRepo = _repoFactory.GetPsalertsEventRepo()

            Protected Overrides Sub OnStart(ByVal args() As String)
                ' Add code here to start your service. This method should set things
                ' in motion so your service can do its work.
                If Not _started Then
                    Startup()
                    _started = True
                End If
            End Sub

            Protected Overrides Sub OnStop()
                'Tear down class variables in order to ensure the service stops cleanly
                _incManager.Dispose()
                _incidentDefinitionRepo = Nothing
                _incidentRepo = Nothing
                _fakeRepoFactory = Nothing
                _repoFactory = Nothing
            End Sub

            Private Sub Startup()
                Dim incidents As IList(Of Incident) = Nothing
                Dim incidentFactory As New IncidentFactory
                incidents = IncidentFactory.GetTwoFakeIncidents
                _repoFactory = New NHibernateRepoFactory
                _fakeRepoFactory = New FakeRepoFactory(incidents)
                _incidentRepo = _fakeRepoFactory.GetIncidentRepo
                _incidentDefinitionRepo = _fakeRepoFactory.GetIncidentDefinitionRepo

                'Start an incident manager session
                _incManager = New IncidentManager.Session(_incidentRepo, _incidentDefinitionRepo, _psalertsEventRepo)
                _incManager.Start()
            End Sub

        End Class

    After a little bit of experimentation I arrived at the above code in the OnStart method. All functionality passed testing when deployed from VS2005 on my development PC; however, when deployed on a true target machine, the service would not start and responds with the following message:

        "The service on local computer started and then stopped..."

    Am I going about this the correct way? If not, how can I best implement my incident manager within the confines of the Windows Service class? It seems pointless to implement a timer for the IncidentManager because this already implements its own timer... Any assistance much appreciated.

    Kind Regards
    Paul J.

    Read the article

  • Good resources for building web-app in Tapestry

    - by Rich
    Hi, I'm currently researching into Tapestry for my company and trying to decide if I think we can port our pre-existing proprietary web applications to something better. Currently we are running Tomcat and using JSP for our front end backed by our own framework that eventually uses JDBC to connect to an Oracle database. I've gone through the Tapestry tutorial, which was really neat and got me interested, but now I'm faced with what seems to be a common issue of documentation. There are a lot of things I'd need to be sure that I could accomplish with Tapestry before I'd be ready to commit fully to it. Does anyone have any good resources, be it a book or web article or anything else, that go into more detail beyond what the Tapestry tutorial explains? I am also considering integrating with Hibernate, and have read a little bit about Spring too. I'm still having a hard time understanding how Spring would be more useful than cumbersome in tandem with Tapestry,as they seem to have a lot of overlapping features. An example I read seemed to use Spring to interface with Hibernate, and then Tapestry to Spring, but I was under the impression Tapestry integrates to the same degree with Hibernate. The resource I'm speaking of is http://wiki.apache.org/tapestry/Tapstry5First_project_with_Tapestry5,_Spring_and_Hibernate . I was interested because I hadn't found information anywhere else on how to maintain user levels and sessions through a Tapestry application before, but wasn't exactly impressed by the need to use Spring in the example.

    Read the article

  • Listener error not connecting

    - by Sham
    I have two databases running on port 1521. When I connect to the ORCL db it gets connected, but when I try to connect to the other DB it gives me the following error:

        ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.

    My listener.ora:

        # listener.ora Network Configuration File: C:\app\Administrator\product\11.2.0\dbhome_1\network\admin\listener.ora
        # Generated by Oracle configuration tools.

        ADMIN_RESTRICTIONS_LISTENER = ON

        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = 127.1.1.1)(PORT = 1521))
            )
          )

        ADR_BASE_LISTENER = C:\app\Administrator

    TNSNAMES.ora:

        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = 127.1.1.1)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

        PARIVARTAN =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = 127.1.1.1)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = Parivartan)
            )
          )

    lsnrctl result:

        STATUS of the LISTENER
        ------------------------
        Alias                     LISTENER
        Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.1.0 - Production
        Start Date                14-DEC-2012 14:22:51
        Uptime                    0 days 0 hr. 19 min. 31 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.2.0\dbhome_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\127.1.1.1\listener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.1.1.1)(PORT=1521)))
        Services Summary...
        Service "orcl" has 1 instance(s).
          Instance "orcl", status READY, has 1 handler(s) for this service...
        Service "orclXDB" has 1 instance(s).
          Instance "orcl", status READY, has 1 handler(s) for this service...
        The command completed successfully

    Reply me soon....

    Read the article

  • How to get a handle on all this middleware?

    - by jkohlhepp
    My organization has recently been wrestling the question of whether we should be incorporating different middleware products / concepts into our applications. Products we are looking at are things like Pegasystems, Oracle BPM / BPEL, BizTalk, Fair Isaac Blaze, etc., etc., etc. But I'm having a hard time getting a handle on all this. Before I go forward with evaluating the usefulness (positive or negative) of these different products I'm trying to get an understanding of all the different concepts in this space. I'm overwhelmed with an alphabet soup of BPM, ESB, SOA, CEP, WF, BRE, ERP, etc. Some products seem to cover one or more of those aspects, others focus on doing one. The terms all seem very ambiguous and conflated with each other. Is there a good resource out there to get a handle on all these different middleware concepts / patterns? A book? A website? An article that sums it up well? Bonus points if there is a resource that maps the various popular products into which pattern(s) they address. Thanks, ~ Justin

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • JVM/CLR Source-compatible Language Options

    - by Nathan Voxland
    I have an open source Java database migration tool (http://www.liquibase.org) which I am considering porting to .NET. The majority of the tool (at least from a complexity side) is around logic like "if you are adding a primary key and the database is Oracle use this SQL. If the database is MySQL use this SQL. If the primary key is named and the database is Postgres use this SQL." I could fork the Java codebase and convert it (manually and/or automatically), but as updates and bug fixes to the above logic come in I do not want to have to apply them to both versions.

    What I would like to do is move all that logic into a form that can be compiled and used natively by both the Java and .NET versions. The code I am looking to convert does not contain any advanced library usage (JDBC, System.out, etc.) that would vary significantly from Java to .NET, so I don't think that will be an issue (at worst it can be designed around).

    So what I am looking for is:

    - A language in which I can code the common parts of my app and compile them into classes usable by the "standard" languages on the target platform
    - Nothing that adds runtime requirements to the system
    - Nothing so strange that it scares away potential contributors

    I know Python and Ruby both have implementations on both the JVM and CLR. How well do they fit my requirements? Has anyone been successful (or unsuccessful) using this technique for cross-platform applications? Are there any gotchas I need to worry about?
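
    To make the kind of logic in question concrete, here is a rough Java sketch of the per-database dispatch being described; the class and method names are invented for illustration and are not Liquibase's actual API:

        public class AddPrimaryKeyStatement {

            private final String tableName;
            private final String constraintName;   // may be null
            private final String columnName;

            public AddPrimaryKeyStatement(String tableName, String constraintName, String columnName) {
                this.tableName = tableName;
                this.constraintName = constraintName;
                this.columnName = columnName;
            }

            // The per-database branching that would need to be shared between
            // the Java and .NET ports.
            public String toSql(String databaseProduct) {
                if ("oracle".equalsIgnoreCase(databaseProduct)) {
                    return "ALTER TABLE " + tableName
                            + " ADD CONSTRAINT " + (constraintName != null ? constraintName : "PK_" + tableName)
                            + " PRIMARY KEY (" + columnName + ")";
                }
                if ("mysql".equalsIgnoreCase(databaseProduct)) {
                    return "ALTER TABLE " + tableName + " ADD PRIMARY KEY (" + columnName + ")";
                }
                if ("postgresql".equalsIgnoreCase(databaseProduct) && constraintName != null) {
                    return "ALTER TABLE " + tableName
                            + " ADD CONSTRAINT " + constraintName + " PRIMARY KEY (" + columnName + ")";
                }
                throw new IllegalArgumentException("No SQL mapping for: " + databaseProduct);
            }
        }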

    Read the article

  • EJB3Unit testing no-tx-datasource

    - by justastefan
    Hello, I am doing tests on an EJB3 project using Ejb3Unit (http://ejb3unit.sourceforge.net/Session-Bean.html) for testing. All my services use @PersistenceContext(unitName = "bla"). I set up the ejb3unit.properties like this:

        ejb3unit_jndi.1.isSessionBean=true
        ejb3unit_jndi.1.jndiName=ejb/MyServiceBean
        ejb3unit_jndi.1.className=com.company.project.MyServiceBean

    Everything works with the in-memory database. Now I additionally want to test another service bean with @PersistenceContext(unitName = "noTxDatasource") that goes against the <no-tx-datasource> defined in my datasources.xml:

        <datasources>
          <local-tx-datasource>
            ...
          </local-tx-datasource>
          <no-tx-datasource>
            <jndi-name>noTxDatasource</jndi-name>
            <connection-url>...</connection-url>
            <driver-class>oracle.jdbc.OracleDriver</driver-class>
            <user-name>bla</user-name>
            <password>bla</password>
          </no-tx-datasource>
        </datasources>

    How do I tell Ejb3Unit to make this work:

        Object object = InitialContext.doLookup("java:/noTxDatasource");
        if (object instanceof DataSource) {
            return ((DataSource) object).getConnection();
        } else {
            return null;
        }

    Currently it fails saying:

        javax.NamingException: Cannot find the name (noTxDataSource) in the JNDI tree
        Current bindings: (ejb/MyServiceBean=com.company.project.MyServiceBean)

    How can I add this no-tx-datasource to the JNDI bindings?
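
    Judging by the "Current bindings" message, Ejb3Unit sets up its own in-memory JNDI context from ejb3unit.properties, so one workaround (a sketch, not necessarily the idiomatic Ejb3Unit way, and whether it coexists cleanly with Ejb3Unit's own context factory is an assumption to verify) is to bind the data source yourself before the tests run, for instance with Spring's SimpleNamingContextBuilder from the spring-test/spring-mock jar. The Oracle URL and credentials below are placeholders mirroring the datasources.xml entry:

        import org.springframework.jdbc.datasource.DriverManagerDataSource;
        import org.springframework.mock.jndi.SimpleNamingContextBuilder;

        public class NoTxDatasourceTestBinding {

            // Call once before the tests that need java:/noTxDatasource.
            public static void bindNoTxDatasource() throws Exception {
                // Placeholder URL/credentials mirroring the datasources.xml entry.
                DriverManagerDataSource ds = new DriverManagerDataSource(
                        "jdbc:oracle:thin:@localhost:1521:orcl", "bla", "bla");
                ds.setDriverClassName("oracle.jdbc.OracleDriver");

                // Activates an in-memory JNDI context and binds the data source
                // under the same name the production code looks up.
                SimpleNamingContextBuilder builder =
                        SimpleNamingContextBuilder.emptyActivatedContextBuilder();
                builder.bind("java:/noTxDatasource", ds);
            }
        }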

    Read the article

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to SQL Server 2k5 using Ruby 1.8.7 over W2k3 with ActiveRecord 2.3.5. But when I run 'rake migrate' it throws the following:

        rake migrate --trace
        Hoe.new {...} deprecated. Switch to Hoe.spec.
        Invoke migrate (first_time)
        Invoke environment (first_time)
        Execute environment
        Execute migrate
        rake aborted!
        no such file to load -- odbc
        (...)
        C:/Program Files/test/Rakefile:146
        (...)

    So, my Rakefile at line 146 says:

        ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)

    The database.yml has been configured in so many ways without success. I've tried setting the adapter mode to odbc, configuring a system DSN, and relying completely on the ActiveRecord support for SQL Server, but no success at all. The same Rakefile works fine over Postgres and Oracle with the proper gems installed, of course. But I can't get this to work. Any help will be appreciated. Thanks in advance!

    Read the article
