Search Results

Search found 24720 results on 989 pages for 'path info'.


  • Sensitive Data Storage - Best Practices

    - by Kenneth
    I recently started working on a personal project where I was connecting to a database using Java. This got me thinking. I have to provide the login information for a database account on the DB server in order to access the database. But if I hard code it in then it would be possible for someone to decompile the program and extract that login info. If I store it in an external setup file then the same problem exists only it would be even easier for them to get it. I could encrypt the data before storing it in either place but it seems like that's not really a fail safe either and I'm no encryption expert by any means. So what are some best practices for storing sensitive setup data for a program?
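
    Since the question is explicitly about Java, here is one common, minimal approach sketched in that language: keep the credentials out of the compiled jar entirely and read them at runtime from the environment, or from an external, permission-restricted properties file. The environment variable names and the config path below are illustrative assumptions, not anything from the original question.

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.Properties;

        public class DbCredentials {

            // Illustrative names only; adjust to your own environment and layout.
            private static final String ENV_USER = "DB_USER";
            private static final String ENV_PASSWORD = "DB_PASSWORD";
            private static final String CONFIG_PATH =
                    System.getProperty("user.home") + "/.myapp/db.properties";

            public static Properties load() throws IOException {
                Properties props = new Properties();
                String user = System.getenv(ENV_USER);
                String password = System.getenv(ENV_PASSWORD);

                if (user != null && password != null) {
                    // Credentials supplied by the environment (e.g. set by the service manager).
                    props.setProperty("user", user);
                    props.setProperty("password", password);
                } else {
                    // Fall back to a properties file outside the jar, readable only by
                    // the account the application runs as (chmod 600 on Unix-like systems).
                    try (FileInputStream in = new FileInputStream(CONFIG_PATH)) {
                        props.load(in);
                    }
                }
                return props;
            }
        }

    This does not make the secret unreadable to someone who already controls the machine, but it does keep it out of version control and out of the decompiled bytecode, which is the specific concern raised above.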

    Read the article

  • Best Practices around Oracle VM with RAC: RAC SIG webcast - Thursday, March 18th -

    - by adam.hawley
    The RAC SIG will be hosting an interesting webcast this Thursday, March 18th at 9 a.m. Pacific time (5 p.m. GMT) on: Best Practices around Oracle VM with RAC. The adoption of virtualization technology has risen tremendously and continues to grow due to the rapid change and growth rate of IT infrastructures. With this in mind, this seminar focuses on configuration best practices, examining how Oracle RAC scales & performs in a virtualized environment, and evaluating Oracle VM Server's ease of use. Roger Lopez from Dell IT will be presenting. This Week's Webcast Connection Info: ==================================== Webcast URL (use Internet Explorer): https://ouweb.webex.com/ouweb/k2/j.php?ED=134103137&UID=1106345812&RT=MiM0 Voice can either be heard via the webconference or via the following dial-in: Participant Dial-In 877-671-0452 International Dial-In 706-634-9644 International Dial-In No Link http://www.intercall.com/national/oracleuniversity/gdnam.html Intercall Password 86336

    Read the article

  • Can't install kernel-uek-headers for currently running kernel

    - by haydenc2
    I have just created a VM in VMWare and installed a minimal install of Oracle Enterprise Linux 6.3. # cat /etc/oracle-release Oracle Linux Server release 6.3 It is running with the UEK kernel. # uname -r 2.6.39-200.24.1.el6uek.x86_64 When I try and install VMWare Tools, I get the following error. Searching for a valid kernel header path... The path "" is not a valid path to the 2.6.39-200.24.1.el6uek.x86_64 kernel headers. Would you like to change it? [yes] I have version 2.6.39 of the UEK installed, but the kernel-uek-headers are only 2.6.32. # yum list kernel-uek Installed Packages kernel-uek.x86_64 2.6.39-200.24.1.el6uek @anaconda-UEK2/6.3 kernel-uek.x86_64 2.6.39-200.29.3.el6uek @ol6_UEK_latest # yum list kernel-uek-headers Installed Packages kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek @ol6_latest And it appears that the headers for 2.6.39 aren't there. # yum list kernel-uek-headers --showduplicates Installed Packages kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek @ol6_latest Available Packages kernel-uek-headers.x86_64 2.6.32-100.28.5.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.9.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.11.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.15.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.17.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.34.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.35.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.36.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.37.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.16.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.19.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.20.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.23.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.3.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.4.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.7.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.11.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.20.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.21.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.24.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.25.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.27.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.29.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.29.2.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.32.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek ol6_latest The kernel for 2.6.32 is there. 
# yum list kernel-uek --showduplicates Installed Packages kernel-uek.x86_64 2.6.39-200.24.1.el6uek @anaconda-UEK2/6.3 kernel-uek.x86_64 2.6.39-200.29.3.el6uek @ol6_UEK_latest Available Packages kernel-uek.x86_64 2.6.32-100.28.5.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.9.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.11.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.15.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.17.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.34.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.35.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.36.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.37.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.16.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.19.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.20.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.23.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.3.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.4.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.7.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.11.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.20.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.21.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.24.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.25.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.27.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.29.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.29.2.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.32.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.32.2.el6uek ol6_latest kernel-uek.x86_64 2.6.39-100.5.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.6.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.7.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.10.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.24.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.2.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.3.el6uek ol6_UEK_latest Should I downgrade the kernel to 2.6.32 so I can install VMWare tools? Is there another way to get the kernel-uek-headers package for the 2.6.39 UEK?
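
    One thing worth checking before downgrading: VMware Tools compiles kernel modules, so what it really needs are the full kernel build headers, which for UEK are shipped in the kernel-uek-devel package rather than kernel-uek-headers. A hedged sketch of that route (exact package availability depends on which repositories are enabled):

        # See whether a devel package matching the running kernel is available
        yum list kernel-uek-devel --showduplicates

        # Install the one that matches the running 2.6.39 UEK
        yum install kernel-uek-devel-$(uname -r)

        # The VMware Tools installer can then be pointed at the header tree it creates
        ls /usr/src/kernels/$(uname -r)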

    Read the article

  • Wifi Direct P2P-go and P2P-client support in ubuntu

    - by giri
    I am trying to get Wi-Fi Direct working on a laptop with Ubuntu 12.04. When I run the command 'iw list' it doesn't show P2P-client or P2P-GO under 'supported interface modes'. From http://wireless.kernel.org/en/users/Documentation/iw/replace-iwconfig I found that 'iwconfig' was replaced with 'iw'. Does this mean that my hardware doesn't support Wi-Fi Direct, or is it just a software (driver) issue? If it is just a software issue, please help me find the right drivers that support Wi-Fi Direct. Here is the info about my network controllers from the output of 'lspci': 02:00.0 Ethernet controller: Atheros Communications Inc. AR242x / AR542x Wireless Network Adapter (PCI-Express) (rev 01) 06:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8055 PCI-E Gigabit Ethernet Controller (rev 13)
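
    Two quick read-only checks help separate the hardware question from the driver question: lspci -k shows which kernel driver is actually bound to the Atheros adapter, and iw phy dumps the full capability list for each wireless PHY, including the supported interface modes. If P2P-client/P2P-GO never appear in that list, the driver in use simply does not advertise them.

        # Which kernel driver is bound to the Atheros adapter (bus address taken from the lspci output above)?
        lspci -k -s 02:00.0

        # Full capability dump per wireless PHY, including "Supported interface modes"
        iw phy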

    Read the article

  • My new DNS change works from America but not Sweden

    - by Dougie
    Several hours ago I changed the nameserver and DNS info on one of my domains at my domain registrar. When I access the domain from my home computers, and when my friends access it, we all reach the old IP address hosting the dead site (we all live in Sweden). But when I access the website from my mobile phone, through google.com/translate, or via North American proxies, the website is shown like it should be. Why is this? Does it take time for the change to take effect in different locations/countries? I find it very strange and would like to start using my site now. Do you think it will change, or could I have been doing something wrong?
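
    This is usually resolver caching rather than anything country-specific: resolvers that looked the name up before the change keep serving the old record until its TTL expires, so some networks see the new address before others. A quick way to compare what different resolvers are returning (the domain and nameserver below are placeholders):

        # What your local/ISP resolver currently returns, with the remaining TTL
        dig yourdomain.se A

        # What a public resolver returns, bypassing your ISP's cache
        dig @8.8.8.8 yourdomain.se A

        # Ask the new authoritative nameserver directly to confirm the record itself is correct
        dig @ns1.newprovider.example yourdomain.se A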

    Read the article

  • How to use Moonlight to play videos on rtlmost.hu?

    - by B. Roland
    Hi! In Hungary, the biggest TV channel is RTL Klub, and they have a video archive site. They use Silverlight instead of Flash :( What is annoying is that they use the latest version of Silverlight, around 4.x, but Moonlight doesn't support it yet. I've tried in Google Chrome (latest dev version) and in Firefox (latest stable version), and I've used both versions of Moonlight, the latest stable and the prerelease. The player loader is displayed and loaded, but no player is displayed after 30 minutes of waiting. If I want to switch to Ubuntu completely, how can I manage to play these videos? Thanks for your answers. Test video here. Debug info: Source: http://www.rtlklub.hu/most/player/soda/SodaMediaCenter.Player.Rtl.v3.5.xap Width: 555px Height: 490px Background: # RuntimeVersion: 4.0.50826.0 Windowless: no MaxFrameRate: 60 Codecs: ms-codecs Build configuration: debug, sanity checks

    Read the article

  • Social Network Updates: While You Were Busy Marketing 2

    - by Mike Stiles
    Since social moves at the speed of data, it’s already time for another update, as we did back in April, on the changes the various social networks have made or gone through while you were busy marketing. Facebook There’s a lot of talk Facebook’s developing a mobile product to act like Flipboard and surface news, from both users and media outlets. The biggest news was Facebook/Instagram’s introduction of 15-second videos, enhanced with with filters, to take some of Vine’s candy. You can also delete parts of videos and rerecord them, and there’s image stabilization. Facebook’s ad revenue is coming along just fine, thank you very much. 35% quarter-to-quarter growth in Q2. And it looks like new formats like Mobile App Install Ads and Unpublished Page Posts are adding to the mix. If you don’t already, you’ll soon see a little camera in comment boxes letting you insert photos right into the comments you make. The drive toward “more visual” continues. The other big news is Facebook’s adoption of our Twitter friend, the hashtag. Adding # sets apart the post topic so it can be easily found or discovered. It’s also being added to Google Plus, Tumblr, and Pinterest. Twitter Want to send someone a promoted tweet when they’re in range of your store? That could be happening by the end of this year. Some users have been seeing automatic in-stream previews of images on Twitter.com. Right now it’s images in your own tweets, but we can assume all tweets are next. Get your followers organized! Twitter raised the limit on the number of lists you can create from 20 to 1,000. They also raised the number of accounts you can have in a list from 500 to 5,000. Twitter started notifying you when someone favorites a tweet you’re mentioned in or re-tweets a tweet you re-tweeted. Anyway, it’s the first time Twitter’s notified you about indirect interactions like that. Who’s afraid of Instagram? A study shows 6-second Vine videos are being posted to Twitter at the rate of 9/second, up from 5/second 2 months ago. Vine has over 13 million users and branded Vines are 4x more likely to be shared than video ads. Google Plus Now featuring a 3-column redesigned stream, and images that take up a whole column. And photo filters Auto Highlight and Auto Awesome work to turn your photos into a real show. Google Hangouts is the workhorse for all Google messaging now, it’s not just an online chat with 9 people anymore. Google Plus Dashboard improves the connection between your company’s Google Plus business page and your Google Plus Local. Updates go out across all Google properties and you can do your managing from the dashboard. With Google Plus’ authorship system, you can build “Author Rank” based on what you write and put on the web. If your stuff is +1’ed and shared a lot, you’re the real deal and there are search result benefits. LinkedIn "Who's Viewed Your Updates" shows you what you’ve shared recently, who saw it and what they did about it in real-time. “Influencers” is, well, influential. Traffic to all LI news products has gone up 8x since it was introduced. LinkedIn is quickly figuring out how to get users to stick around awhile. You and your brand can post images and documents in status updates now. In fact, that whole “document posting” thing is making some analysts wonder if LinkedIn will drift on over to the Dropboxes and YouSendIts of the world. C’mon, admit it. Your favorite part of LinkedIn is being able to see who’s viewed your profile. Now you’ve got even more info and can see what/who you have in common. 
Premium users get even deeper insights about how people are finding them. If you’re a big fan of security, you’ll love that LinkedIn started offering two-factor authentication (2FA). It’s optional, but step 2 is a one-time code texted to your registered mobile. Pinterest A study showed pins have a looong shelf life compared to other social net posts. “Clicks kept coming for 30 days and beyond.” Most pins are timeless, and the infinite scroll causes people to see older pins. Is it a keeper? Pinterest jumped 82% to 54 million users in the past year. It’s valued at $2.5 billion and is one of the biggest sources of referral traffic there is. That said, CEO Ben Silbermann adds, "Right now, we don't make money." A new search feature stops you from having to endlessly scroll through your own pins looking for that waterfall picture you posted. Simply select “just my pins” in the search bar. New "Rich Pins" lets brands add info like price and availability to pins that can be updated daily via a data feed from your merchant site. Not so fast, you have to apply to Pinterest for it first. Like other social nets, Pinterest does not allow sexual content, nudity, or even partial nudity. However…some art contains nudity, and Pinterest wants to allow art. What constitutes “art” will be judged by…what we have to assume are Pinterest employees who love their job. @mikestilesPhoto: stock.xchng, Tim Marmon

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse most NuGet packages for client side frameworks and scripts, dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow. The first thing I’ve been doing is add a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style application, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open it’s open that’s typically the only server code I work with regularily. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC application perhaps with some JavaScript associated with them , I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have a .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that view retrieval is blocked:<system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this:<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and<script src="~/Views/Admin/QuizPrognosisItems.js"></script> which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downside to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive then constantly having to search for files in your project. Client Side Folders The particular project shown above in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make the SPA applications end up below the App folder. Here’s what that looks like for me – here this is an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong  – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, by passing the solution explorer altogether. As long as you remember to use (which I sometimes don’t) and you know what you’re looking for it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As so many things with ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET MVC

    Read the article

  • A new Bursting patch for EBS Customers.

    - by ashish.shrivastava
    Patch 8594771 for Bursting functionality was released last week. The patch is for EBS R12 and 11i Customers. Following bugs have been fixed against this patch. 9550733 - DELIVERYREQUEST DOES NOT ACCEPT THE COMM IN EMAIL ADDRESS ALIAS. 9483876 : XML PUBLISHER REPORT BURSTING PROGRAM NOT DEBUGGING AFTER PATCH 7352374 9433950 : JAVA.UTIL.ZIP.ZIPEXCEPTION: ZIP FILE MUST HAVE AT LEAST ONE ENTRY 9285648 : XML PUBLISHER REPORT BURSTING PROGRAM IGNORES PORT NUMBER FOR WEBDAV DELIVERY 9269498 : NOT ABLE TO GENERATE ALL TYPES OF OUTPUTS WHILE BURSTING WITH JDE INTEGRATION. 9110360 : BURSTING USING BACKGROUND AND TITLE TAGS IN BURSTING CONTROL FILE FAILS 8937963 : BURSTING - UNABLE TO USE SPECIFIC FILENAME MORE THAN ONCE CAUSING ZIPEXCEPTION 8871779 : THE CONCURRENT LOG DOES NOT CONTAIN INFO AFTER PATCH 7352374 8594771 - BURSTING DOES NOT SUPPORT ESCAPE CHARACTERS LIKE "FIRST LAST "

    Read the article

  • Microsoft SDE Interview vs Microsoft SDET Interview and Resources to Study

    - by vinayvasyani
    I have always heard that SDE interviews are much harder to crack than SDET interviews. Is that really true? I have also heard that if a candidate doesn't do well in an SDE interview, he is sometimes offered an SDET position instead. How much truth is there to these claims? I would highly appreciate it if someone could share good resources and guidelines for preparing for Microsoft interviews: which books to read, which notes, online programming question websites, etc. Give as much info as possible. Thanks in advance to everyone for your valuable help and contribution.

    Read the article

  • The Low Down Dirty Azure Blues

    - by SGWellens
    Remember the SETI screen savers that used to be on everyone's computer? As far I as know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed. I have a bunch of files stored on Microsoft's SkyDrive platform which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative. What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an Asp.Net web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure and since the site has low bandwidth/space requirements the cost would be competitive with the existing host provider: Azure Pricing Calculator. And, Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free. I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command: RESTORE DATABASE MyDatabase FROM DISK ='C:\temp\MyBackup.bak' Msg 40510, Level 16, State 1, Line 1 Statement 'RESTORE DATABASE' is not supported in this version of SQL Server. Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure" Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy. Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database which enhances security but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive. 
Then I created a web site, the URL it returned look something like this: http://MyWebSite.azurewebsites.net/ Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page….and voila, I was in like Flynn.   There are other options of deploying directly from Visual Studio, TFS, etc. but I do not like integrated tools that do things without my asking: It's usually hard to figure out what they did and how to undo it. I uploaded the aspx , cs , webconfig, etc. files. Bu it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So, I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow. The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure Server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything particular to make it work or it just needed time to sort things out. However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid." Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017 Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person: Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted. If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time.  But that is just a guess. Another problem. While trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory to see if there was a permissions problem with some test code: String MyPath = MapPath(@"~\PDFs\Test.txt"); File.WriteAllText(MyPath, "Hello Azure");     I got this message: Access to the path <my path> is denied. After some research, I understood that since Azure is a cloud based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs and trying to manage local files would be problematic to say the least. There are other options: Use the Azure APIs to get a path. That way the location of the storage is separated from the application. 
However, the web site is then tied Azure and can't be moved to another hosting platform. Use the ApplicationData folder (not recommended). Write to BLOB storage. Or, I could try and stream the PDF output directly to the email and not save a file. I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note the author of the BLOG added a comment saying he has updated his entry). Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it). *Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not in Visual Studio 2010 Express. I hope someone finds this useful. Steve Wellens CodeProject
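
    As a footnote to the "stream the PDF output directly to the email" option mentioned above: once the underlying rendering restriction is sorted out, that approach sidesteps the local file write entirely because the report never touches the file system. A rough sketch, assuming the usual Microsoft.Reporting.WebForms types and SMTP settings coming from web.config (names and addresses are placeholders):

        // Hypothetical helper: render a LocalReport to an in-memory PDF and mail it,
        // so nothing is ever written to a local directory.
        using System.IO;
        using System.Net.Mail;
        using Microsoft.Reporting.WebForms;

        public static class ReportMailer
        {
            public static void EmailReportAsPdf(LocalReport report, string from, string to)
            {
                string mimeType, encoding, extension;
                string[] streamIds;
                Warning[] warnings;

                // Render to a byte array instead of a file on disk.
                byte[] pdfBytes = report.Render(
                    "PDF", null, out mimeType, out encoding, out extension,
                    out streamIds, out warnings);

                using (var message = new MailMessage(from, to))
                {
                    message.Subject = "Report";
                    message.Attachments.Add(
                        new Attachment(new MemoryStream(pdfBytes), "report.pdf", mimeType));

                    // SmtpClient picks up host/credentials from <system.net><mailSettings>.
                    new SmtpClient().Send(message);
                }
            }
        }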

    Read the article

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012: There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5. A Better AjaxFileUpload Control We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser. Using the AjaxFileUpload Control Let me walk-through using the AjaxFileUpload in the most basic scenario. And then, in following sections, I can explain some of its more advanced features. Here’s how you can declare the AjaxFileUpload control in a page: <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload ID="AjaxFileUpload1" AllowedFileTypes="mp4" OnUploadComplete="AjaxFileUpload1_UploadComplete" runat="server" /> The exact appearance of the AjaxFileUpload control depends on the features that a browser supports. In the case of Google Chrome, which supports drag-and-drop upload, here’s what the AjaxFileUpload looks like: Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager with any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to App_Data folder AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName)); } This method saves the uploaded file to your website’s App_Data folder. I’m assuming that you have an App_Data folder in your project – if you don’t have one then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd. 
You need to declare this handler in your application’s root web.config file like this: <configuration> <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" maxRequestLength="42949672" /> <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </httpHandlers> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="4294967295"/> </requestFiltering> </security> </system.webServer> </configuration> Notice that the web.config file above also contains configuration settings for the maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings — as I did in the web.config file above — in order to accept large file uploads. Supporting Chunked File Uploads Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API — such as Google Chrome or Mozilla Firefox — then the file is uploaded in multiple chunks. You can see chunking in action by opening F12 Developer Tools in your browser and observing the Network tab: Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this: http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64 &fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks. Showing Upload Progress on All Browsers The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods which are not supported by .NET 3.5. For example, here is what the Network tab looks like when you use the AjaxFileUpload with Microsoft Internet Explorer: Here’s what the requests in the Network tab look like: GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1 Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server. 
Here’s what the response body of a request looks like when about 20% of a file has been uploaded: Buffering to a Temporary File When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don’t call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to a database table e.DeleteTemporaryData(); } You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts. A Better MaskedEdit Extender This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release: · 17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error · 11758 MaskedEdit causes error in JScript when working with 2-digits year · 18810 Maskededitextender/validator Date validation issue · 23236 MaskEditValidator does not work with date input using format dd/mm/yyyy · 23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender · 26685 MaskedEditExtender@(ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox · 16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive · 11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture · 25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator · 23221 MaskedEditExtender date separator problem · 15233 Day and month swap in Dynamic user control · 15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction · 9389 MaskedEditValidator – when on no entry · 11392 MaskedEdit Number format messed up · 11819 MaskedEditExtender erases all values beyond first comma separtor · 13423 MaskedEdit(Extender/Validator) combo problem · 16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender · 10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set. · 15190 MaskedEditValidator can’t make use of MaskedEditExtender’s UserDateFormat property · 13898 MaskedEdit Extender with custom date type mask gives javascript error · 14692 MaskedEdit error in “yy/MM/dd” format. · 16186 MaskedEditExtender does not handle century properly in a date mask · 26456 MaskedEditBehavior. 
ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == “” · 21474 Error on MaskedEditExtender working with number format · 23023 MaskedEditExtender’s ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false · 13656 MaskedEditValidator Min/Max Date value issue Conclusion This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.

    Read the article

  • How could I install Ubuntu One on KDE and use it with dolphin?

    - by tobiasBora
    I see in some topics that to install Ubuntu One I have to execute these commands: sudo add-apt-repository ppa:apachelogger/ubuntuone-kde sudo apt-get update sudo apt-get install ubuntuone-kde However, it doesn't work (is ppa:apachelogger/ubuntuone-kde closed?): sudo add-apt-repository ppa:apachelogger/ubuntuone-kde You are about to add the following PPA to your system: ubuntuone-kde tag:launchpad.net:2008:redacted More info: https://launchpad.net/~apachelogger/+archive/ubuntuone-kde Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.UJpbwWvbov --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv tag:launchpad.net:2008:redacted gpg: "tag:launchpad.net:2008:redacted" is not a key ID: skipped How could I install Ubuntu One on KDE and use it with Dolphin?

    Read the article

  • "Virtual Machine Manager" and "Virtual Machine Server" setup manual

    - by urtihu
    Is there a manual available that covers the proper setup of a "Virtual Machine Server" with no GUI, managed from an Ubuntu workstation with a GUI and "Virtual Machine Manager" installed? Both are version 12.04. I get the following error message: unable to connect to libvirt. Verify that: - the libvirt-bin package is installed - the libvirt daemon has been started - you are a member of the libvirtd group. The package is installed, but for some reason starting the daemon seems to crash: libvirtd start info: libvirt version 0.9.8 error: virExecWithHook:328 : cannot find 'pm-is-supported' in path: No such file or directory also qemucapsInit:856: Failed to get host power management capabilities So I guess I did not set the server up correctly. All the manuals I found do not mention "Virtual Machine Manager". During the server installation I only chose the packages for remote SSH access and the "Virtual Machine Server" option. I would like to find a manual that covers this combination; the ones I found only cover GUI machines that run both on the same box, which will not really help with system performance as a hypervisor.
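
    The missing 'pm-is-supported' binary ships in the pm-utils package, so installing that on the headless server is usually enough to let libvirtd start; after that, the connection from Virtual Machine Manager only needs libvirtd group membership and an SSH URI. A rough sketch, where the username and server hostname are placeholders:

        # On the headless server: pm-is-supported comes from the pm-utils package
        sudo apt-get install pm-utils
        sudo service libvirt-bin restart

        # Make sure your user really is in the libvirtd group (log out and back in afterwards)
        sudo adduser $USER libvirtd

        # From the workstation, test the same connection Virtual Machine Manager will use:
        virsh -c qemu+ssh://youruser@vmserver/system list --all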

    Read the article

  • Getting WCF Services in a Silverlight solution to play nice on deployment

    - by brendonpage
    I have come across 2 issues with deploying WCF services in a Silverlight solution, admittedly the one is more of a hiccup, and only occurs if you take the easy way out and reference your services through visual studio. The First Issue This occurs when you deploy your WFC services to an IIS server. When browse to the services using your web browser, you are greeted with “This collection already contains an address with scheme http.  There can be at most one address per scheme in this collection.”. When you make a call to this service from your Silverlight application, you get the extremely helpful “NotFound” error, this error message can be found in the error property of the event arguments on the complete event handler for that call. As it did with me this will leave most people scratching their head, because the very same services work just fine on the ASP.NET Development Web Server and on my local IIS server. Now I’m no server/hosting/IIS expert so I did a bit of searching when I first encountered this issue. I found out this happens because IIS supports multiple address bindings per protocol (http/https/ftp … etc) per web site, but WCF only supports binding to one address per protocol. This causes a problem when the WCF service is hosted on a site with multiple address bindings, because IIS provides all of the bindings to the host factory when running the service. While this problem occurs mainly on shared hosting solutions, it is not limited to shared hosting, it just seems like all shared hosting providers setup sites on their servers with multiple address bindings. For interests sake I added functionality to the example project attached to this post to dump the addresses given to the WCF service by IIS into a log file. This was the output on the shared hosting solution I use: http://mydomain.co.za/Services/TestService.svc http://www.mydomain.co.za/Services/TestService.svc http://mydomain-co-za.win13.wadns.net/Services/TestService.svc http://win13/Services/TestService.svc As you can see all these addresses are for the http protocol, which is where it all goes wrong for WCF. Fixes for the First Issue There are a few ways to get around this. The first being the easiest, target .NET 4! Yes that's right in .NET 4 WCF services support multiple addresses per protocol. This functionality is enabled by an option, which is on by default if you create a new project, you will need to turn on if you are upgrading to .NET 4. To do this set the multipleSiteBindingsEnabled property of the serviceHostingEnviroment tag in the web.config file to true, as shown below: <system.serviceModel>     <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> Beware this ONLY works in .NET 4, so if you don’t have a server with .NET 4 installed on that you can deploy to, you will need to employ one of the other work a rounds. The second option will work for .NET 3.5 & 4. For this option all you need to do is modify the web.config file and add baseAddressPrefixFilters to the serviceHostingEnviroment tag as shown below: <system.serviceModel>     <serviceHostingEnvironment>         <baseAddressPrefixFilters>              <add prefix="http://www.mydomain.co.za"/>         </baseAddressPrefixFilters>     </serviceHostingEnvironment> </system.serviceModel> These will be used to filter the list of base addresses that IIS provides to the host factory. 
When specifying these prefix filters be sure to specify filters which will only allow 1 result through, otherwise the entire exercise will be pointless. There is however a problem with this work a round, you are only allowed to specify 1 prefix filter per protocol. Which means you can’t add filters for all your environments, this will therefore add to the list of things to do before deploying or switching dev machines. The third option is the one I currently employ, it will work for .NET 3, 3.5 & 4, although it is not needed for .NET 4. For this option you create a custom host factory which inherits from the ServiceHostFactory class. In the implementation of the ServiceHostFactory you employ logic to figure out which of the base addresses, that are give by IIS, to use when creating the service host. The logic you use to do this is completely up to you, I have seen quite a few solutions that simply statically reference an index from the list of base addresses, this works for most situations but falls short in others. For instance, if the order of the base addresses where to change, it might end up returning an address that only resolves on the servers local network, like the last one in the example I gave at the beginning. Another instance, if a request comes in on a different protocol, like https, you will be creating the service host using an address which is on the incorrect protocol, like http. To reliably find the correct address to use, I use the address that the service was requested on. To accomplish this I use the HttpContext, which requires the service to operate with AspNetCompatibilityRequirements set on. If for some reason running you services with AspNetCompatibilityRequirements on isn’t an option, you can still use this method, you will just have to come up with your own logic for selecting the correct address. First you will need to enable AspNetCompatibilityRequirements for your hosting environment, to do this you will need to set it to true in the web.config file as shown below: <system.serviceModel>     <serviceHostingEnvironment AspNetCompatibilityRequirements="true" /> </system.serviceModel> You will then need to mark any services that are going to use the custom host factory, to allow AspNetCompatibilityRequirements, as shown below: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class TestService { } Now for the custom host factory, this is where the logic lives that selects the correct address to create service host with. The one i use is shown below: public class CustomHostFactory : ServiceHostFactory { protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses) { // // Compose a prefix filter based on the requested uri // string prefixFilter = HttpContext.Current.Request.Url.Scheme + "://" + HttpContext.Current.Request.Url.DnsSafeHost; if (!HttpContext.Current.Request.Url.IsDefaultPort) { prefixFilter += ":" + HttpContext.Current.Request.Url.Port.ToString() + "/"; } // // Find a base address that matches the prefix filter // foreach (Uri baseAddress in baseAddresses) { if (baseAddress.OriginalString.StartsWith(prefixFilter)) { return new ServiceHost(serviceType, baseAddress); } } // // Throw exception if no matching base address was found // throw new Exception("Custom Host Factory: No base address matching '" + prefixFilter + "' was found."); } } The most important line in the custom host factory is the one that returns a new service host. 
This has to return a service host that specifies only one base address per protocol. Since I filter by the address the request came on in, I only need to create the service host with one address, since this address will always be of the correct protocol. Now you have a custom host factory you have to tell your services to use it. To do this you view the markup of the service by right clicking on it in the solution explorer and choosing “View Markup”. Then you add/set the value of the Factory property to the full namespace path of you custom host factory, as shown below. And that is it done, the service will now use the specified custom host factory. The Second Issue As I mentioned earlier this issue is more of a hiccup, but I thought worthy of a mention so I included it. This issue only occurs when you add a service reference to a Silverlight project. Visual Studio will generate a lot of code for you, part of that generated code is the ServiceReferences.ClientConfig file. This file stores the endpoint configuration that is used when accessing your services using the generated proxy classes. Here is what that file looks like: <configuration>     <system.serviceModel>         <bindings>             <customBinding>                 <binding name="CustomBinding_TestService">                     <binaryMessageEncoding />                     <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />                 </binding>                 <binding name="CustomBinding_BrokenService">                     <binaryMessageEncoding />                     <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />                 </binding>             </customBinding>         </bindings>         <client>             <endpoint address="http://localhost:49347/services/TestService.svc"                 binding="customBinding" bindingConfiguration="CustomBinding_TestService"                 contract="TestService.TestService" name="CustomBinding_TestService" />             <endpoint address="http://localhost:49347/Services/BrokenService.svc"                 binding="customBinding" bindingConfiguration="CustomBinding_BrokenService"                 contract="BrokenService.BrokenService" name="CustomBinding_BrokenService" />         </client>     </system.serviceModel> </configuration> As you will notice the addresses for the end points are set to the addresses of the services you added the service references from, so unless you are adding the service references from your live services, you will have to change these addresses before you deploy. This is little more than an annoyance really, but it adds to the list of things to do before you can deploy, and if left unchecked that list can get out of control. Fix for the Second Issue The way you would usually access a service added this way is to create an instance of the proxy class like so: BrokenServiceClient proxy = new BrokenServiceClient(); Closer inspection of these generated proxy classes reveals that there are a few overloaded constructors, one of which allows you to specify the end point address to use when creating the proxy. From here all you have to do is come up with some logic that will provide you with the relative path to your services. 
Since my WCF services are usually hosted in the same project as my Silverlight app, I use the class shown below:

    public class ServiceProxyHelper
    {
        /// <summary>
        /// Create a broken service proxy
        /// </summary>
        /// <returns>A broken service proxy</returns>
        public static BrokenServiceClient CreateBrokenServiceProxy()
        {
            Uri address = new Uri(Application.Current.Host.Source, "../Services/BrokenService.svc");
            return new BrokenServiceClient("CustomBinding_BrokenService", address.AbsoluteUri);
        }
    }

Then I create an instance of the proxy class using my service helper class like so:

    BrokenServiceClient proxy = ServiceProxyHelper.CreateBrokenServiceProxy();

The way this works: Application.Current.Host.Source returns the URL of the ClientBin folder the Silverlight app is hosted in, and "../Services/BrokenService.svc" is used as the relative path to the service from the ClientBin folder; combined by the Uri object, this gives me the URL to my service. The "CustomBinding_BrokenService" is a reference to the endpoint configuration in the ServiceReferences.ClientConfig file. Yes, this means you still need the ServiceReferences.ClientConfig file. All this is doing is using a different endpoint address than the one specified in the ServiceReferences.ClientConfig file; all the other settings from the ServiceReferences.ClientConfig file are still used when creating the proxy.

I have uploaded an example project which covers the custom host factory solution from the first issue and everything from the second issue. I included the code to write the list of base addresses to a log file in my implementation of the custom host factory; this is not needed for the custom host factory to function and can safely be removed. Download (WCFServicesDeploymentExample.zip)
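For reference, the "View Markup" step described under the first issue amounts to editing the @ServiceHost directive in the .svc file. Below is a minimal sketch of what that directive might look like once the Factory attribute is set; the namespace and file names here are illustrative assumptions, not taken from the example project:

    <%@ ServiceHost Language="C#" Debug="true"
        Service="MyApp.Services.TestService"
        CodeBehind="TestService.svc.cs"
        Factory="MyApp.Services.CustomHostFactory" %>

With the Factory attribute in place, IIS activates the service through CustomHostFactory instead of the default ServiceHostFactory, so the address-selection logic above runs for every activation.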

    Read the article

  • 409 CONFLICT : MAAS

    - by amir beygi
I have some problems with my MAAS. juju bootstrap result:

    2012-08-31 03:59:17,721 INFO Bootstrapping environment 'maas' (origin: distro type: maas)...
    Unexpected Error interacting with provider: 409 CONFLICT
    2012-08-31 03:59:17,951 ERROR Unexpected Error interacting with provider: 409 CONFLICT

Also I have 3 nodes in Commissioning status (delete node is disabled and there is no start button). DHCP seems to be working because LAN boot works, but the boot ends with:

    ALERT! /dev/disk/by-label/cloudimg-rootfs does not exist. Dropping to a shell! BusyBox.... (initramfs)

    Read the article

  • Unlock the Full Value of Oracle CRM On Demand

    - by charles.knapp
Register for this live Oracle CRM On Demand Virtual Community Session! Oracle CRM On Demand delivers the most complete On Demand CRM available anywhere. But how can you ensure you are getting maximum value from the many powerful features that Oracle CRM On Demand offers? Join our interactive Oracle CRM On Demand Virtual Community Session on Tuesday, January 11, 2011 from 10.00-11 a.m. PT / 1 p.m.-2 p.m. ET to get expert advice and discuss the best ways to unlock the full potential of Oracle CRM On Demand with Mike Lairson, author of 'Oracle CRM On Demand Reporting'. Book Offer: Send your Oracle CRM On Demand configuration ideas before the Webcast to [email protected] and you could win a free copy of 'Oracle CRM On Demand Reporting' by Mike Lairson. Learn more and register now!

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with Merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what’s top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were:

When to start promoting for holiday: seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, and it was attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions -- carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, publishing their “lowest prices of the season” as early as October – assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced to drop prices. Others kept their set pricing with negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their “lowest prices of the season” guides and will then follow suit.

Attribution is tough – and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants to know hard data. This has led many retailers to invest in attribution; carefully tracking their online marketing efforts to determine what gets “credit” for the sale, instead of giving credit to the “last click.” Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide – like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually “doesn’t matter what group gets the credit” if they all add to the sale. No doubt, the area of attribution will be a big area of retail investment in the coming years.
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both the online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.

Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going into honing the search and navigation experience. Adding new elements to search and navigation like typeahead, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)

Reducing cart abandonment: always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less than half the battle; getting them to click “buy” and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We’re all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e. shipping, processing, and taxes are not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged to reduce cart abandonment, while not showing the total figure so early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment – as it serves as a filter to those shopping within a specific price band. Much of the cart abandonment discussion leads us to…

The free shipping / free returns question: it’s no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you’re not a mega-retailer, shipping is an expensive part of doing business that doesn’t allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the “free returns” path, especially in apparel. A number of retailers we spoke with are testing a flat rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But free shipping remains king.

Social ads and retargeting: they are working, but do they turn off consumers? That’s the big question.
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you’ve been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is whether this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complementary channel to SEO when it comes to engaging new customers.

While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don’t doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond.

Customers who read this article also found value in the following stories:
Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
What’s new in Oracle Commerce v11.1 for Retail
What the Content+Commerce Equation is Missing

    Read the article

  • All my emails to Yahoo!, Hotmail and AOL are going to Spam, though I've implemented every validation

    - by Chetan
    Hi, I've implemented everything and checked everything (SPF, DomainKey, DKIM, reverse lookup), and only Gmail is allowing my emails to go to Inbox. Yahoo, Hotmail and AOL are all sending my messages to Spam. What am I doing wrong? Please help! Following are the headers of messages to Yahoo, Hotmail and AOL. I've changed names and domain names. The domain names I'm sending mail from are polluxapp.com and gemini.polluxapp.com. Yahoo: From Shift Licensing Tue Jan 26 21:55:14 2010 X-Apparently-To: [email protected] via 98.136.167.163; Tue, 26 Jan 2010 13:59:12 -0800 Return-Path: X-YahooFilteredBulk: 208.115.108.162 X-YMailISG: gPlFT1YWLDtTsHSCXAO2fxuGq5RdrsMxPffmkJFHiQyZW.2RGdDQ8OEpzWDYPS.MS_D5mvpu928sYN_86mQ2inD9zVLaVNyVVrmzIFCOHJO2gPwIG8c2L8WajG4ZRgoTwMFHkyEsefYtRLMg8AmHKnkS0PkPscwpVHtuUD91ghsTSqs4lxEMqhqw60US0cwMn_r_DrWNEUg_sESZsYeZpJcCCPL0wd6zcfKmtYaIkidsth3gWJPJgpwWtkgPvwsJUU_cmAQ8hAQ7RVM1usEs80PzihTLDR1yKc4RJCsesaf4NUO_yN1cPsbFyiaazKikC.eiQk4Z3VU.8O5Vd8i7mPNyOeAjyt7IgeA_ X-Originating-IP: [208.115.108.162] Authentication-Results: mta1035.mail.sk1.yahoo.com from=example.com; domainkeys=pass (ok); from=example.com; dkim=permerror (bad sig) Received: from 127.0.0.1 (EHLO gemini.example.com) (208.115.108.162) by mta1035.mail.sk1.yahoo.com with SMTP; Tue, 26 Jan 2010 13:59:12 -0800 Received: from gemini.example.com (gemini [127.0.0.1]) by gemini.example.com (Postfix) with ESMTP id 3984E21A0167 for ; Tue, 26 Jan 2010 13:55:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=example.com; h=to :subject:from:content-type:message-id:date; s=mail; bh=bRIHfxE3S e+YeCrIOqziZsiESJA=; b=J+D56Czff+6wGjQycLEvHyT32+06Nngf+6h7Ep6DL SmmJv3ihiAFJIJiPxiwLNpUsOSHhwJYjYQtynbBnag40A6EUBIsucDR+VoEYD+Cc 9L0dV3QD5D77VpG9PnRQDQa91R+NPIt5og9xbYfUWJ1b/jXkZopb0VTM+H9tandM 24= DomainKey-Signature: a=rsa-sha1; c=nofws; d=example.com; h=to:subject :from:content-type:message-id:date; q=dns; s=mail; b=pO5YvvjGTXs 3Qa83Ibq9woLq5VSsxUD5uoSrjNrW9ICMmdWyJpb9oT5byFR9hMthomTmfGWkkh6 3VxtD0hb0HVonN+1iheqJ9QBBOctadLCAOPZV3mfA99XUu7Y0DR2qtkU/UkSe8In 5PENWFbwub88ZsRDiW3hCbNHl+UO8Jsc= Received: by gemini.example.com (Postfix, from userid 502) id 386DE21A0166; Tue, 26 Jan 2010 13:55:14 -0800 (PST) To: [email protected] Subject: Shift License For James Xavier From: "Shift Licensing" Content-type: text/html Message-Id: <[email protected] Date: Tue, 26 Jan 2010 13:55:14 -0800 (PST) Content-Length: 282` Hotmail: X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MjtTQ0w9Ng== X-Message-Status: n:0 X-SID-PRA: [email protected] X-AUTH-Result: NONE X-Message-Info: 6sSXyD95QpWzUBaRfzf3NMbaiSGCCYGXSczlzLw49r01I25elu3oYM0V2uNa8BV2O7DOiFEeewTBKMtN+PW+ig== Received: from gemini.example.com ([208.115.108.162]) by snt0-mc4-f7.Snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.3959); Tue, 26 Jan 2010 13:18:53 -0800 Received: from gemini.example.com (gemini [127.0.0.1]) by gemini.example.com (Postfix) with ESMTP id 9431321A0167 for ; Tue, 26 Jan 2010 13:18:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=gemini.example.com; h=to :subject:message-id:date:from; s=mail; bh=DLF0k+uELpY6If5o3SWlSj 7j0vw=; b=nAMpb47xTVh73y6a2rf6V1rtYHuufr46dtuwWtHyFC85QKfZJReJJL oFIPjgEC28/1wSdy8VbfLG1g64W1hvnJjet3rcyv3ANNYxnFaiH5yt3SDEiLxydS gjCmNcZXyiVsWtpv7atVRO/t/Own+oFB9zz/9mj43Bhm4bnZ2cTno= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gemini.example.com; h=to :subject:message-id:date:from; q=dns; s=mail; b=sFpNxlskyz4MYT38 BA/rQ6ZAcQjhy7STkLPckrCDVVZcE4/zukHyARq7guMtYCCEjXoIbVEtNikPC97F cGpJGGZrppTGjx62N0flxG8hvwejiJYnUJF1EIP4JckGWyEI+21vtWLLQ27eegtN 
fs9OkIQ2iUPC/4u8N1eqiff0VZU= Received: by gemini.example.com (Postfix, from userid 504) id 8ED7221A0166; Tue, 26 Jan 2010 13:18:53 -0800 (PST) To: [email protected] Subject: Testing this Message-Id: <[email protected] Date: Tue, 26 Jan 2010 13:18:53 -0800 (PST) From: [email protected] Return-Path: [email protected] X-OriginalArrivalTime: 26 Jan 2010 21:18:54.0039 (UTC) FILETIME=[29CEE670:01CA9ECD] AOL: X-AOL-UID: 3158.1902377530 X-AOL-DATE: Tue, 26 Jan 2010 5:07:23 PM Eastern Standard Time Return-Path: Received: from rly-mg06.mx.aol.com (rly-mg06.mail.aol.com [172.20.83.112]) by air-mg06.mail.aol.com (v126.13) with ESMTP id MAILINMG061-a1d4b5f6787a4; Tue, 26 Jan 2010 17:07:22 -0500 Received: from gemini.example.com (gemini.example.com [208.115.108.162]) by rly-mg06.mx.aol.com (v125.7) with ESMTP id MAILRELAYINMG067-a1d4b5f6787a4; Tue, 26 Jan 2010 17:07:04 -0500 Received: from gemini.example.com (gemini [127.0.0.1]) by gemini.example.com (Postfix) with ESMTP id 32B3821A0167 for ; Tue, 26 Jan 2010 14:07:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=gemini.example.com; h=to :subject:message-id:date:from; s=mail; bh=RL0GLHd3dZ8IlIHoHIhA/U cLtUE=; b=BKg4p3qnaIdFRjAbvUa+Hwcyc6W91v4B4hN95dVymJrxyUBycWMUSC nzKmJ5QllhCYjwO+S7GrRdmlFpjBaK8kt2qmdCyC2UuiDF6xY6MXx/DBF56QpYtZ YDY4kXdiEMSbooH14B4CCPhaCTdC1wCtV0diat3EANCLxSDYAYq5k= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gemini.example.com; h=to :subject:message-id:date:from; q=dns; s=mail; b=fDSjNpfWs7TfGXda uio8qbJIyD+UmPL+C0GM1VeeV8FADj6JiYIT1nT3iBwSHlrLFCJ1wxPbE4d9CGl8 gQkPIV6T4TL7ha052nur0EOWoBLoBAOmhTshF/gsIY+/KMibbIczuRyTgIGVV5Tw GZVGFddVFOYgee7SAu0KNFm7aIk= Received: by gemini.example.com (Postfix, from userid 504) id 2D5F521A0166; Tue, 26 Jan 2010 14:07:03 -0800 (PST) To: [email protected] Subject: Testing Message-Id: <[email protected] Date: Tue, 26 Jan 2010 14:07:03 -0800 (PST) From: [email protected] X-AOL-IP: 208.115.108.162 X-AOL-SCOLL-AUTHENTICATION: mail_rly_antispam_dkim-d227.1 ; domain : gemini.example.com DKIM : pass X-Mailer: Unknown (No Version) Content-Type: text/plain; charset="US-ASCII" Content-Transfer-Encoding: 7bit
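When chasing problems like this, it helps to confirm what receiving servers actually see by querying the published DNS records directly. Below is a minimal sketch using dig; the selector "mail" comes from the s=mail tag in the DKIM-Signature headers above, and the IP address is the sending address shown in the Received headers (the real domain would replace polluxapp.com as appropriate):

    # SPF / DomainKeys policy records for the sending domains
    dig +short TXT polluxapp.com
    dig +short TXT _domainkey.polluxapp.com

    # DKIM / DomainKeys public key for the "mail" selector
    dig +short TXT mail._domainkey.polluxapp.com
    dig +short TXT mail._domainkey.gemini.polluxapp.com

    # Reverse (PTR) lookup for the sending IP seen in the headers
    dig +short -x 208.115.108.162

Note that the Yahoo header above reports "dkim=permerror (bad sig)" for the signature whose d= tag is the bare domain, which suggests the key published under that domain does not match the key actually used to sign; checking both selectors above would confirm or rule that out.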

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9 OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ MagicChatzi Dutch ACE’s and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr… leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac… Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA eelzinga Thanks for organising it Andreas! #soacommunity eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3 OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN oracletechnet Wikis.oracle.com lives leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-… Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h 9 Nov Favorite Undo Retweet Reply OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3 soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3 OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11 orclateamsoa A-Team SOA Blog: Case Management in BPM 11g -  Mark Foster Oracle BPM 11g & Case Management I’ve seen… t.co/l5zb6pFr t_winterberg Die nächste SIG #SOA steht an: 7.12. in Hamburg. 
Neues Tooling und Erfahrungen rund um Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key. lucasjellema Lucas Jellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api… OTNArchBeat SOA, cloud: it’s the architecture that matters | Joe McKendrick zd.net/tNCiTF orclateamsoa: Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too? brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg eelzinga New blogpost : Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN

    Read the article

  • Unlock the Full Value of Oracle CRM On Demand

    - by ruth.donohue
Register for this live Oracle CRM On Demand Virtual Community Session! Oracle CRM On Demand delivers the most complete On Demand CRM solution on the market. But how can you ensure you are getting maximum value from the many powerful features that Oracle CRM On Demand offers? Join our interactive Oracle CRM On Demand Virtual Community Session on Tuesday, January 11, 2011 from 10.00-11 a.m. PT / 1 p.m.-2 p.m. ET to get expert advice and discuss the best ways to unlock the full potential of Oracle CRM On Demand with Mike Lairson, author of 'Oracle CRM On Demand Reporting'. Book Offer: Send your Oracle CRM On Demand configuration ideas before the Webcast to [email protected] and you could win a free copy of 'Oracle CRM On Demand Reporting' by Mike Lairson. Learn more and register now!

    Read the article

  • Enable remote VNC from the commandline?

    - by Stefan Lasiewski
I have one computer running Ubuntu 10.04 which is running Vino, the default VNC server. I have a second Windows box which is running a VNC client, but does not have any X11 capabilities. I am ssh'd into the Ubuntu host from the Windows host, but I forgot to enable VNC access on the Ubuntu host. On the Ubuntu host, is there a way for me to enable VNC connections from the Ubuntu commandline? Update: As @koanhead says below, there is no man page for vino (e.g. man -k vino and info vino return nothing), and vino --help doesn't show any help.
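One approach commonly suggested for GNOME 2-era releases such as 10.04 is to toggle Vino's GConf keys from the shell; this is only a sketch under that assumption, not a verified answer for this exact setup:

    # Enable Vino remote desktop access for the logged-in user
    gconftool-2 --set --type bool /desktop/gnome/remote_access/enabled true

    # Optional: don't require someone at the desk to confirm each connection
    gconftool-2 --set --type bool /desktop/gnome/remote_access/prompt_enabled false

Since vino-server runs inside the user's desktop session, the keys need to be set as the user who owns that session for the change to take effect.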

    Read the article

  • The first Oracle Solaris 11 book is now available

    - by user12608550
    The first Oracle Solaris 11 book is now available: Oracle Solaris 11 System Administration - The Complete Reference by Michael Jang, Harry Foxwell, Christine Tran, and Alan Formy-Duval The book covers the Oracle Solaris 11 11/11 release; although the next OS release will be available soon, the book covers major topics and features that are not expected to change significantly. The target audience is broad, and includes Solaris admins, Linux admins and developers, and even those somewhat unfamiliar with UNIX. The coauthors include practitioners and developers from outside of Oracle, emphasizing their field experience using Solaris 11. The book complements the extensive Oracle Solaris 11 Information Library, and covers the main system administration topics of installation, configuration, and management. More Oracle Solaris 11 info here

    Read the article

  • Silverlight Cream for January 01, 2011 -- #1020

    - by Dave Campbell
    In this short New Year's Day 2011 Issue, 3 Mikes: Mike Taulty, Mike Snow, and Mike Ormond. Above the Fold: Silverlight: "Native Extensions for Silverlight (NESL)?" Mike Taulty WP7: "Monitoring Memory Usage on Windows Phone 7" Mike Ormond From SilverlightCream.com: Native Extensions for Silverlight (NESL)? Mike Taulty has a really good write-up on Native Extensions for Silverlight... he describes what that project is about and gives guidance on best practices. Win7 Mobile: Uniquely Identifying a Device or User Mike Snow has a post up describing how to uniquely identify the phone or device your app is running on using the Microsoft.Phone.Info.DeviceExtendedProperties namespace Monitoring Memory Usage on Windows Phone 7 Mike Ormond has a post up showing how to turn on and make use of the framerate counters in WP7 Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • I made a 2D ENGINE for Android, looking for cooperation.

    - by Roger Travis
My name is Robert, I am an Android programmer and I wanted to show off my latest project - a 2D game engine. You can see it in action here - https://play.google.com/store/apps/details?id=engineDemo.com My engine's main advantage is its ease of use. To have your level up and running, you'll need only 3 lines of code:

    ABoxView aboxView = new ABoxView(this);
    setContentView(aboxView);
    aboxView.loadLevel("level/level02");

Levels are created in a special level constructor, and object physical properties are stored in a corresponding XML file. I am looking to cooperate with those who might be interested in using my engine in their games. You can email me at [email protected] or post here. Thanks, Robert

    Read the article

< Previous Page | 357 358 359 360 361 362 363 364 365 366 367 368  | Next Page >