Search Results

Search found 49998 results on 2000 pages for 'web config'.

  • jQuery plugin design pattern for using `this` in private methods?

    - by thebossman
    I'm creating jQuery plugins using the pattern from the Plugins Authoring page:

        (function($) {
            $.fn.myPlugin = function(settings) {
                var config = {'foo': 'bar'};
                if (settings) $.extend(config, settings);
                this.each(function() {
                    // element-specific code here
                });
                return this;
            };
        })(jQuery);

    My code calls for several private methods that manipulate this. I am calling these private methods using the apply(this, arguments) pattern. Is there a way of designing my plugin such that I don't have to call apply to pass this from method to method? My modified plugin code looks roughly like this:

        (function($) {
            $.fn.myPlugin = function(settings) {
                var config = {'foo': 'bar'};
                if (settings) $.extend(config, settings);
                this.each(function() {
                    method1.apply(this);
                });
                return this;
            };

            function method1() {
                // do stuff with $(this)
                method2.apply(this);
            }

            function method2() {
                // do stuff with $(this), etc...
            }
        })(jQuery);
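
    One common alternative is to capture the wrapped element once inside each() and pass it to the helpers as an ordinary argument, so apply() is no longer needed. A rough sketch (not from the original post):

        (function ($) {
            $.fn.myPlugin = function (settings) {
                var config = $.extend({ foo: 'bar' }, settings);

                return this.each(function () {
                    var $el = $(this);   // wrapped once per element
                    method1($el, config);
                });
            };

            function method1($el, config) {
                // do stuff with $el instead of $(this)
                method2($el, config);
            }

            function method2($el, config) {
                // do stuff with $el, etc...
            }
        })(jQuery);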

    Read the article

  • Bash script to find a directory, list its contents and sub-folder info

    - by lithiumion
    Hi, I want to write a script that will:

    1. Locate the folder "store" on a *nix filesystem
    2. Move into that folder
    3. Print a list of its contents with last modification dates
    4. Calculate sub-folder sizes

    This folder's absolute path changes from server to server, but the folder name is always the same. There is a config file that contains the correct path to that folder, though it doesn't give the absolute path to it. Sample config:

        Account ON
        DIR-Store /hdd1
        Scheduled YES

    According to the config file the absolute path would be "/hdd1/backup/store/". I need the script to grep the "/hdd1" (or whatever follows the word "DIR-Store"), append "/backup/store/" to it, move into the folder "store", print a list of its contents, and calculate sub-folder sizes. Until now I have manually edited the script on each server to reflect the path to the "store" folder. Here is a sample script:

        #!/bin/bash
        echo " "
        echo " "
        echo "Moving Into Directory"
        cd /hdd1/backup/store/
        echo "Listing Directory Content"
        echo " "
        ls -alh
        echo "*******************************"
        sleep 2
        echo " "
        echo "Calculating Backup Size"
        echo " "
        du -sh store/*
        echo "********** Done! **********"

    I know I could use grep:

        cat /etc/store.conf | grep DIR-Store

    I just don't know how to get around selecting the path, adding the "/backup/store/", and moving ahead. Any help will be appreciated.
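
    A rough sketch of how the script could derive the path itself, assuming the config file is /etc/store.conf and the path always follows the DIR-Store keyword:

        #!/bin/bash
        # pull the second field of the DIR-Store line, e.g. "/hdd1"
        base=$(awk '/^DIR-Store/ {print $2}' /etc/store.conf)
        store="${base}/backup/store"

        echo "Moving Into Directory ${store}"
        cd "${store}" || exit 1

        echo "Listing Directory Content"
        ls -alh

        echo "Calculating Backup Size"
        du -sh "${store}"/*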

    Read the article

  • What job title is most suitable for the objective in my resume, and what salary range should I expect?

    - by user354177
    I was a classic ASP developer in 2000. After a year of full-time employment, I left the field. I found a part-time position as an ASP developer again in 2005 and taught myself VB.NET. In 2007, I got my current full-time job as an ASP.NET web developer. I taught myself C#, LINQ to SQL, web services, AJAX, and creating all kinds of reports with Reporting Services. One and a half years ago, I enrolled in a part-time graduate program in Database and Web Systems. I have two semesters to go and so far my GPA is 4.0/4.0. My job responsibility is to collect business requirements from other departments, design the database, write stored procedures, create aspx pages, and create reports. I love what I do and want to advance my career to the next level. What I enjoy most is designing the relational database, and I would eventually like to become a .NET architect. I recently had an interview for an ASP.NET web developer position, but I was surprised and disappointed that the position would only involve creating aspx pages. I would not even have the opportunity to write stored procedures, let alone design the database (those would be provided by another group). Furthermore, they asked me some detailed questions about Web Forms, some of which I did not know the answers to; they might have been disappointed as well. I am eager to learn and can apply what I learn to real projects right away. I believe that no matter what specific skills I am lacking for a new position, I can catch up quickly. I am looking for a job in the $70k range. The objective in my resume is "Experienced C# Web Application Developer." Given the experience from the last interview, I wonder if that objective is really what I want. Could somebody answer my questions? Thank you.

    Read the article

  • PHP & HTML Purifier Error: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given

    - by TaG
    I'm trying to integrate HTML Purifier (http://htmlpurifier.org/) to filter my user-submitted data, but I get the following error on line 22:

        mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given

    Line 22 is:

        if (mysqli_num_rows($dbc) == 0) {

    How can I fix this problem? Here is the PHP code:

        if (isset($_POST['submitted'])) { // Handle the form.
            require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';
            $config = HTMLPurifier_Config::createDefault();
            $config->set('Core.Encoding', 'UTF-8'); // replace with your encoding
            $config->set('HTML.Doctype', 'XHTML 1.0 Strict'); // replace with your doctype
            $purifier = new HTMLPurifier($config);
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT users.*, profile.* FROM users INNER JOIN contact_info ON contact_info.user_id = users.user_id WHERE users.user_id=3");
            $about_me = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['about_me']));
            $interests = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['interests']));
            if (mysqli_num_rows($dbc) == 0) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $dbc = mysqli_query($mysqli, "INSERT INTO profile (user_id, about_me, interests) VALUES ('$user_id', '$about_me', '$interests')");
            }
            if ($dbc == TRUE) {
                $dbc = mysqli_query($mysqli, "UPDATE profile SET about_me = '$about_me', interests = '$interests' WHERE user_id = '$user_id'");
                echo '<p class="changes-saved">Your changes have been saved!</p>';
            }
            if (!$dbc) {
                // There was an error...do something about it here...
                print mysqli_error($mysqli);
                return;
            }
        }
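
    The error means mysqli_query() returned false instead of a result set, so the query itself is failing; note that the SELECT refers to profile.* but only joins contact_info, which would likely cause exactly that. A minimal sketch of checking the result before calling mysqli_num_rows():

        <?php
        // check the mysqli_query() result before using it: it returns false (a boolean)
        // whenever the SQL fails, and mysqli_num_rows() only accepts a mysqli_result
        $mysqli = mysqli_connect("localhost", "root", "", "sitename");

        $sql = "SELECT users.*, profile.* FROM users
                INNER JOIN profile ON profile.user_id = users.user_id
                WHERE users.user_id = 3";

        $dbc = mysqli_query($mysqli, $sql);

        if ($dbc === false) {
            // the query failed, so report why instead of passing false around
            die('Query error: ' . mysqli_error($mysqli));
        }

        if (mysqli_num_rows($dbc) === 0) {
            // safe to call now: $dbc is guaranteed to be a mysqli_result
        }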

    Read the article

  • Having trouble using jQuery's .animate() to animate a div from left to right, right to left?

    - by Alex
    Hello, I seem to be having difficulties using jQuery's .animate() to animate an absolutely positioned div from right to left on one button click, and left to right on another button click. I was wondering if you would be willing to help me understand what I'm doing wrong? Below are my relevant CSS, HTML, and jQuery code. I can click the #moveLeft button and it will indeed animate to the left, but when I click the #moveRight button, nothing happens. Where am I going wrong? Thanks!

    CSS:

        #scorecardTwo {
            position: absolute;
            padding: 5px;
            width: 300px;
            background-color: #E1E1E1;
            right: 0px;
            top: 0px;
            display: none;
        }

    HTML:

        text text Left Right

    jQuery:

        $("#scorecardTwo").fadeIn("slow");

        $("#moveLeft").bind("click", function() {
            var config = {
                "left" : function() { return $(this).offset().left; },
                "right" : function() { return $("body").innerWidth() - $K("#scorecardTwo").width(); }
            };
            $("#scorecardTwo").css(config).animate({"left": "0px"}, "slow");
            $(this).attr("disabled", "disabled");
            $("#moveRight").attr("disabled", "");
        });

        $("#moveRight").bind("click", function() {
            var config = {
                "left" : function() { return $(this).offset().left; },
                "right" : function() { return $("body").innerWidth() - $K("#scorecardTwo").width(); }
            };
            $("#scorecardTwo").css(config).animate({"right" : "0px"}, "slow");
            $(this).attr("disabled", "disabled");
            $("#moveLeft").attr("disabled", "");
        });
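
    A common way around this is to animate only the left property in both directions and clear right first, since jQuery cannot reliably animate an element that is positioned with right against a left target. A rough sketch (selectors as in the question):

        $("#moveLeft").bind("click", function () {
            var $box = $("#scorecardTwo");
            // pin the current position as `left`, drop `right`, then slide to the left edge
            $box.css({ left: $box.offset().left, right: "auto" })
                .animate({ left: 0 }, "slow");
        });

        $("#moveRight").bind("click", function () {
            var $box = $("#scorecardTwo");
            var target = $("body").innerWidth() - $box.outerWidth();
            $box.css({ left: $box.offset().left, right: "auto" })
                .animate({ left: target }, "slow");
        });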

    Read the article

  • I don't know how to solve this error

    - by wide
    It works locally, but when I deploy it to the server I get this error:

        Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).]
        System.Web.UI.PageTheme.SetStyleSheet() +2458406
        System.Web.UI.Page.OnInit(EventArgs e) +8699420
        System.Web.UI.Control.InitRecursive(Control namingContainer) +333
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378
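
    The usual fix is to make sure the page (or its master page) has a server-side head control so the theme engine has somewhere to inject the stylesheet links. A minimal sketch (theme name is a placeholder):

        <%-- the themed CSS is injected into <head runat="server">, so it must exist --%>
        <%@ Page Language="C#" Theme="Default" %>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>My Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
            </form>
        </body>
        </html>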

    Read the article

  • Update paths of already-created Paperclip attachments

    - by Horace Loeb
    I used to have this buggy Paperclip config:

        class Photo < ActiveRecord::Base
          has_attached_file :image,
            :storage => :s3,
            :styles => { :medium => "600x600>", :small => "320x320>", :thumb => "100x100#" },
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => "/:style/:filename"
        end

    This is buggy because two images cannot have the same size and filename. To fix this, I changed the config to:

        class Photo < ActiveRecord::Base
          has_attached_file :image,
            :storage => :s3,
            :styles => { :medium => "600x600>", :small => "320x320>", :thumb => "100x100#" },
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => "/:style/:id_:filename"
        end

    Unfortunately this breaks all URLs to attachments I've already created. How can I update those file paths or otherwise get the URLs to work?
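
    One possible one-off migration script, assuming the aws-sdk gem and that the old objects live under the original ":style/:filename" keys; the bucket name, region and style list are placeholders to adjust:

        # copy every existing attachment from the old key layout to the new one
        # (run once, e.g. from a rake task or the Rails console)
        s3     = Aws::S3::Client.new(region: ENV["AWS_REGION"])
        bucket = "my-bucket"   # hypothetical bucket name

        Photo.find_each do |photo|
          [:original, :medium, :small, :thumb].each do |style|
            old_key = "#{style}/#{photo.image_file_name}"
            new_key = "#{style}/#{photo.id}_#{photo.image_file_name}"
            s3.copy_object(bucket:      bucket,
                           copy_source: "#{bucket}/#{old_key}",
                           key:         new_key)
          end
        end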

    Read the article

  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to loading this ISAPI filter from the 32 bit folder – it should be loading from Framework64 folder note the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this focus, but it’s a default loaded component. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks): Switch IIS to run in 32 bit mode Fix the filter listing in ApplicationHost.config Switching IIS to allow 32 Bit Code This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter and enabling 32 bit code gets you around this problem. To configure your Application Pool, open the Application Pool in IIS Manager bring up Advanced Options and Enable 32 Bit Applications: And voila the error message above goes away. Fix Filters Enabling 32 bit code is a quick fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps there’s usually no need for enabling 32 bit code in an Application Pool as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them however are specifically scoped to the specifically to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Hence lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with the 32 bit editor (who ever thought that was a good idea???? WTF). You have to open ApplicationHost.Config in a 64 bit native text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy Notepad was my only choice. Or as an alternative you can use the IIS 7.5 Configuration Editor which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in property editor type interface. 
I had no idea this tool existed prior to today and it’s pretty cool as it gives you some visual clues to options available – especially in absence of an Intellisense scheme you’d get in Visual Studio (which doesn’t work). To use the Configuration Editor go the Web Site root and use the Configuration Editor option in the Management Group. Drill into System.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries: <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" /> <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" /> <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" /> The key attribute we’re concerned with here is the preCondition and the bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here or in the filters list shown earlier and leave only the bitness specific versions active. The preCondition attribute acts as a filter and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye out in general – if a bitness values are missing it’s easy to run into conflicts like this with any settings that are global and especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries or remove the non-specific versions and add bit specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errand un-bitness set entry. Hopefully this post will help out anybody who runs into a similar situation without having to trouble shoot all the way down into the configuration settings and noticing the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are what I was reluctant to find. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least.© Rick Strahl, West Wind Technologies, 2005-2011Posted in IIS7   ASP.NET  
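
    For reference, a sketch of what the isapiFilters collection could look like after the fix described above, keeping only the bitness-specific entries (paths exactly as shown in the post):

        <isapiFilters>
          <!-- the bitness-neutral ASP.Net_4.0 entry is removed; only scoped entries remain -->
          <filter name="ASP.Net_4.0_64bit"
                  path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll"
                  enableCache="true"
                  preCondition="runtimeVersionv4.0,bitness64" />
          <filter name="ASP.Net_4.0_32bit"
                  path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll"
                  enableCache="true"
                  preCondition="runtimeVersionv4.0,bitness32" />
        </isapiFilters>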

    Read the article

  • Quartz.Net Windows Service Configure Logging

    - by Tarun Arora
    In this blog post I’ll be covering, Logging for Quartz.Net Windows Service 01 – Why doesn’t Quartz.Net Windows Service log by default 02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc 03 – Results: Logging in action If you are new to Quartz.Net I would recommend going through, A brief Introduction to Quartz.net Walkthrough of Installing & Testing Quartz.Net as a Windows Service Writing & Scheduling your First HelloWorld job with Quartz.Net   01 – Why doesn’t Quartz.Net Windows Service log by default If you are trying to figure out why… The Quartz.Net windows service isn’t logging The Quartz.Net windows service isn’t writing anything to the event log The Quartz.Net windows service isn’t writing anything to a file How do I configure Quartz.Net windows service to use log4Net How do I change the level of logging for Quartz.Net Look no further, This blog post should help you answer these questions. Quartz.NET uses the Common.Logging framework for all of its logging needs. If you navigate to the directory where Quartz.Net Windows Service is installed (I have the service installed in C:\Program Files (x86)\Quartz.net, you can find out the location by looking at the properties of the service) and open ‘Quartz.Server.exe.config’ you’ll see that the Quartz.Net is already set up for logging to ConsoleAppender and EventLogAppender, but only ‘ConsoleAppender’ is set up as active. So, unless you have the console associated to the Quartz.Net service you won’t be able to see any logging. <log4net> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <root> <level value="INFO" /> <appender-ref ref="ConsoleAppender" /> <!-- uncomment to enable event log appending --> <!-- <appender-ref ref="EventLogAppender" /> --> </root> </log4net> Problem: In the configuration above Quartz.Net Windows Service only has ConsoleAppender active. So, no logging will be done to EventLog. More over the RollingFileAppender isn’t setup at all. So, Quartz.Net will not log to an application trace log file. 02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc Let’s change this behaviour by changing the config file… In the below config file, I have added the RollingFileAppender. This will configure Quartz.Net service to write to a log file. (<appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">) I have specified the location for the log file (<arg key="configFile" value="Trace/application.log.txt"/>) I have enabled the EventLogAppender and RollingFileAppender to be written to by Quartz. Net windows service Changed the default level of logging from ‘Info’ to ‘All’. This means all activity performed by Quartz.Net Windows service will be logged. You might want to tune this back to ‘Debug’ or ‘Info’ later as logging ‘All’ will produce too much data to the logs. (<level value="ALL"/>) Since I have changed the logging level to ‘All’, I have added applicationSetting to remove logging log4Net internal debugging. 
(<add key="log4net.Internal.Debug" value="false"/>) <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> <sectionGroup name="common"> <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" /> </sectionGroup> </configSections> <common> <logging> <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net"> <arg key="configType" value="INLINE" /> <arg key="configFile" value="Trace/application.log.txt"/> <arg key="level" value="ALL" /> </factoryAdapter> </logging> </common> <appSettings> <add key="log4net.Internal.Debug" value="false"/> </appSettings> <log4net> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender"> <file value="Trace/application.log.txt"/> <appendToFile value="true"/> <maximumFileSize value="1024KB"/> <rollingStyle value="Size"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss} [%t] %-5p %c - %m%n"/> </layout> </appender> <root> <level value="ALL" /> <appender-ref ref="ConsoleAppender" /> <appender-ref ref="EventLogAppender" /> <appender-ref ref="GeneralLog"/> </root> </log4net> </configuration>   Note – Please ensure you restart the Quartz.Net Windows service for the config changes to be picked up by the service   03 – Results: Logging in action Once you start the Quartz.Net Windows Service up, the logging should be initiated to write all activities in the Console, EventLog and File… See screen shots below… Figure – Quartz.Net Windows Service logging all activity to the event log Figure – Quartz.Net Windows Service logging all activity to the application log file Where is the output from log4Net ConsoleAppender? As a default behaviour, the console isn't available in windows services, web services, windows forms. The output will simply be dismissed. Unless you are running the process interactively. Which you can do by firing up Quartz.Server.exe –i to see the output   This was fourth in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering troubleshooting why a scheduled task hasn’t fired on Quartz.net windows service. All Quartz.Net specific blog posts can listed here. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
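
    As a quick way to confirm the appenders above are picking things up, a minimal job sketch (assuming Quartz.NET 2.x and the Common.Logging ILog API) that writes through the same configuration:

        using Common.Logging;
        using Quartz;

        public class HelloWorldJob : IJob
        {
            // resolved through Common.Logging, which the config routes to log4net
            private static readonly ILog Log = LogManager.GetLogger(typeof(HelloWorldJob));

            public void Execute(IJobExecutionContext context)
            {
                // should show up in the console, the event log and Trace/application.log.txt
                Log.Info("HelloWorldJob fired at " + context.FireTimeUtc);
            }
        }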

    Read the article

  • Problem starting Rails server [on hold]

    - by Ahsan Rony
    when I start rails s I get the following error: C:\Sites\ticketee>rails s => Booting WEBrick => Rails 4.1.4 application starting in development on http:/0.0.0.0:3000 => Run `rails server -h` for more startup options => Notice: server is listening on all interfaces (0.0.0.0). Consider using 127.0 .0.1 (--binding option) => Ctrl-C to shutdown server Exiting C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport-4.1.4/lib/act ive_support/dependencies.rb:247:in `require': cannot load such file -- treetop/r untime (LoadError) from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `block in require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:232:in `load_dependency' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/mail-2.5.4/lib /load_parsers.rb:7:in `<module:Mail>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/mail-2.5.4/lib /load_parsers.rb:6:in `<top (required)>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `block in require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:232:in `load_dependency' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/mail-2.5.4/lib /mail.rb:79:in `<module:Mail>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/mail-2.5.4/lib /mail.rb:2:in `<top (required)>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `block in require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:232:in `load_dependency' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/dependencies.rb:247:in `require' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/actionmailer-4 .1.4/lib/action_mailer/base.rb:1:in `<top (required)>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/actionmailer-4 .1.4/lib/action_mailer/railtie.rb:49:in `block in <class:Railtie>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/lazy_load_hooks.rb:36:in `call' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/lazy_load_hooks.rb:36:in `execute_hook' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/lazy_load_hooks.rb:45:in `block in run_load_hooks' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/lazy_load_hooks.rb:44:in `each' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/activesupport- 4.1.4/lib/active_support/lazy_load_hooks.rb:44:in `run_load_hooks' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/application/finisher.rb:64:in `block in 
<module:Finisher>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/initializable.rb:30:in `instance_exec' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/initializable.rb:30:in `run' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/initializable.rb:55:in `block in run_initializers' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:150:in `block i n tsort_each' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:183:in `block ( 2 levels) in each_strongly_connected_component' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:219:in `each_st rongly_connected_component_from' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:182:in `block i n each_strongly_connected_component' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:180:in `each' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:180:in `each_st rongly_connected_component' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/2.0.0/tsort.rb:148:in `tsort_e ach' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/initializable.rb:54:in `run_initializers' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/application.rb:300:in `initialize!' from C:/Sites/ticketee/config/environment.rb:5:in `<top (required)>' from C:/Sites/ticketee/config.ru:3:in `require' from C:/Sites/ticketee/config.ru:3:in `block in <main>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/builder.rb:55:in `instance_eval' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/builder.rb:55:in `initialize' from C:/Sites/ticketee/config.ru:in `new' from C:/Sites/ticketee/config.ru:in `<main>' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/builder.rb:49:in `eval' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/builder.rb:49:in `new_from_string' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/builder.rb:40:in `parse_file' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/server.rb:277:in `build_app_and_options_from_config' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/server.rb:199:in `app' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/server.rb:50:in `app' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/rack-1.5.2/lib /rack/server.rb:314:in `wrapped_app' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/server.rb:130:in `log_to_stdout' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/server.rb:67:in `start' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/commands_tasks.rb:81:in `block in server' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/commands_tasks.rb:76:in `tap' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/commands_tasks.rb:76:in `server' from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands/commands_tasks.rb:40:in `run_command!' 
from C:/RailsInstaller/Ruby2.0.0/lib/ruby/gems/2.0.0/gems/railties-4.1.4 /lib/rails/commands.rb:17:in `<top (required)>' from bin/rails:4:in `require' from bin/rails:4:in `<main>' C:\Sites\ticketee> The server exits automatically even though I don't press Ctrl+C. Can anyone help me fix this problem?
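
    One workaround that is often suggested for this mail/treetop LoadError on Windows is to declare treetop explicitly in the Gemfile; treat the version constraint below as an assumption and match it to what `gem dependency mail -v 2.5.4` reports:

        # Gemfile (hypothetical addition)
        gem 'treetop', '~> 1.4.8'

    Then run `bundle install` and start `rails s` again.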

    Read the article

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, which is a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain-old HTML + CSS + a good helping of JQuery and JQueryUI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJB's containing the business logic. These EJB's are deployed on a cluster of Weblogic Application Servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins. Below, in the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles: Below, you can see that applications are searchable and can be accessed using the same web-based front-end as shown above. And, as can be seen below, knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform: Several thousand call centre agent user accounts are administered using the Helios Maintenance Application. Below the main FAQ page is shown, together with the About dialog: Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle Weblogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense as the rest of the application server environment consists entirely of Oracle products.

    Read the article

  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
    We're excited to announce that the Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, a part of Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g release earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g, is better integrated to fit the way you work, with extreme performance and extreme scalability. Universal Content Management One click Web content management - brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs - all without having to rewrite the application or port it over to a new technology stack or framework. Greater business user empowerment - with next generation desktop integrations and "smart productivity folders", new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with photography, graphics, videos & podcasts created today as well as contribute content within Flash files directly from the Web. Advanced manageability with extreme performance & scalability - centralized system monitoring, installation, logging, performance metrics & diagnostics, with new built in "fast check-in" features, redesigned component management interface - all running on Fusion Middleware infrastructure. Universal Records Management Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used part of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance. DoD 5015.02 v3: Oracle URM is fully certified against all part of the US Department of Defense records management standard - baseline, classified, and Freedom of Information and Privacy Act. This enables Federal, state, & local governments & public agencies, as well as private companies, to maintain regulated compliance. Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out of the box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out of the box Oracle Secure Enterprise Search integration enabling real time full text discovery across disparate systems in an organization. Read the Press Release Watch the 3 Minute ECM 11g Video Get Up to Speed with the What's New in ECM Suite Datasheet Learn More on OTN with new tutorials, downloads and whitepapers

    Read the article

  • Dependency injection with n-tier Entity Framework solution

    - by Matthew
    I am currently designing an n-tier solution which is using Entity Framework 5 (.NET 4) as its data access strategy, but am concerned about how to incorporate dependency injection to make it testable / flexible. My current solution layout is as follows (my solution is called Alcatraz):

    Alcatraz.WebUI: an ASP.NET WebForms project, the front-end user interface; references Alcatraz.Business and Alcatraz.Data.Models.
    Alcatraz.Business: a class library project containing the business logic; references Alcatraz.Data.Access and Alcatraz.Data.Models.
    Alcatraz.Data.Access: a class library project housing AlcatrazModel.edmx and the AlcatrazEntities DbContext; references Alcatraz.Data.Models.
    Alcatraz.Data.Models: a class library project containing POCOs for the Alcatraz model; no references.

    My vision for how this solution would work is that the web UI would instantiate a repository within the business library; this repository would have a dependency (through the constructor) on a connection string, not an AlcatrazEntities instance. The web UI would know the database connection strings, but not that they are Entity Framework connection strings.

    In the Business project:

        public class InmateRepository : IInmateRepository
        {
            private string _connectionString;

            public InmateRepository(string connectionString)
            {
                if (connectionString == null)
                {
                    throw new ArgumentNullException("connectionString");
                }

                EntityConnectionStringBuilder connectionBuilder = new EntityConnectionStringBuilder();
                connectionBuilder.Metadata = "res://*/AlcatrazModel.csdl|res://*/AlcatrazModel.ssdl|res://*/AlcatrazModel.msl";
                connectionBuilder.Provider = "System.Data.SqlClient";
                connectionBuilder.ProviderConnectionString = connectionString;

                _connectionString = connectionBuilder.ToString();
            }

            public IQueryable<Inmate> GetAllInmates()
            {
                AlcatrazEntities ents = new AlcatrazEntities(_connectionString);
                return ents.Inmates;
            }
        }

    In the Web UI:

        IInmateRepository inmateRepo = new InmateRepository(@"data source=MATTHEW-PC\SQLEXPRESS;initial catalog=Alcatraz;integrated security=True;");
        List<Inmate> deathRowInmates = inmateRepo.GetAllInmates().Where(i => i.OnDeathRow).ToList();

    I have a few related questions about this design.

    1) Does this design even make sense in terms of Entity Framework's capabilities? I heard that Entity Framework uses the unit-of-work pattern already; am I just adding another layer of abstraction unnecessarily?

    2) I don't want my web UI to communicate with Entity Framework directly (or even reference it, for that matter); I want all database access to go through the business layer, as in the future I will have multiple projects using the same business layer (web service, Windows application, etc.) and I want it to be easy to maintain / update by keeping the business logic in one central area. Is this an appropriate way to achieve that?

    3) Should the business layer even contain repositories, or should they live in the Access layer? If where they are is alright, is passing a connection string a good dependency to assume?

    Thanks for taking the time to read!
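
    One way to keep that constructor dependency test-friendly is to inject a small context factory rather than building the AlcatrazEntities connection string inside the repository. A rough sketch (interface and class names are made up):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // hypothetical factory abstraction the business layer depends on
        public interface IAlcatrazContextFactory
        {
            AlcatrazEntities Create();
        }

        public class AlcatrazContextFactory : IAlcatrazContextFactory
        {
            private readonly string _connectionString;

            public AlcatrazContextFactory(string connectionString)
            {
                if (connectionString == null) throw new ArgumentNullException("connectionString");
                _connectionString = connectionString;
            }

            public AlcatrazEntities Create()
            {
                return new AlcatrazEntities(_connectionString);
            }
        }

        public class InmateRepository : IInmateRepository
        {
            private readonly IAlcatrazContextFactory _factory;

            // the repository no longer knows how the context is built,
            // so tests can hand in a factory that returns a fake/in-memory context
            public InmateRepository(IAlcatrazContextFactory factory)
            {
                _factory = factory;
            }

            public List<Inmate> GetDeathRowInmates()
            {
                using (var context = _factory.Create())
                {
                    // materialize before the context is disposed
                    return context.Inmates.Where(i => i.OnDeathRow).ToList();
                }
            }
        }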

    Read the article

  • Show raw Text Code from a URL with CodePaste.NET

    - by Rick Strahl
    I introduced CodePaste.NET more than 2 years ago. In case you haven't checked it out, it's a code-sharing site where you can post some code, assign a title and syntax scheme to it and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd.

    Show your own code from the Web

    There's another feature that I tweeted about recently that's been there for some time, but is not used very much: CodePaste.NET has the ability to show raw text based code from a URL on the Web in syntax colored format for any of the formats provided. I use this all the time with code links to my Subversion repository, which only displays code as plain text. Using CodePaste.NET allows me to show syntax colored versions of the same code. For example I can go from this URL:

        http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs

    to a nicely colored source code view at this URL:

        http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23

    Use the Form or access URLs directly

    To get there, navigate to the Web Code icon on the CodePaste.NET site, paste your original URL, and select a language to display. The form creates a link like the one shown above, which has two query string parameters:

    url - The URL for the raw text on the Web
    language - The code language used for syntax highlighting

    Note that the parameters must be URL encoded to work, especially the # in C#, because otherwise the # will be interpreted by the browser as a hash tag to jump to in the target URL. The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring. It doesn't work with localhost URLs, for example. The code returned must be returned in plain text - HTML based text doesn't work. Hope some of you find this a useful feature. Enjoy…

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET
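
    For what it's worth, a small C# sketch of composing such a link; Uri.EscapeDataString takes care of encoding the '#' in "C#" as %23 (the URLs are the ones from the post):

        // build a CodePaste.NET ShowUrl link with properly encoded query values
        string rawUrl = "http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/" +
                        "Westwind.Utilities/SupportClasses/PropertyBag.cs";
        string language = "C#";

        string link = "http://codepaste.net/ShowUrl" +
                      "?url=" + Uri.EscapeDataString(rawUrl) +
                      "&Language=" + Uri.EscapeDataString(language);

        Console.WriteLine(link);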

    Read the article

  • Setup and Use SpecFlow BDD with DevExpress XAF

    - by Patrick Liekhus
    Let’s get started with using the SpecFlow BDD syntax for writing tests with the DevExpress XAF EasyTest scripting syntax.  In order for this to work you will need to download and install the prerequisites listed below.  Once they are installed follow the steps outlined below and enjoy. Prerequisites Install the following items: DevExpress eXpress Application Framework (XAF) found here SpecFlow found here Liekhus BDD/XAF Testing library found here Assumptions I am going to assume at this point that you have created your XAF application and have your Module, Win.Module and Win ready for usage.  You should have also set any attributes and/or settings as you see fit. Setup So where to start. Create a new testing project within your solution. I typically call this with a similar naming convention as used by XAF, my project name .FunctionalTests (i.e. AlbumManager.FunctionalTests). Add the following references to your project.  It should look like the reference list below. DevExpress.Data.v11.x DevExpress.Persistent.Base.v11.x DevExpress.Persistent.BaseImpl.v11.x DevExpress.Xpo.v11.2 Liekhus.Testing.BDD.Core Liekhus.Testing.BDD.DevExpress TechTalk.SpecFlow TestExecutor.v11.x (found in %Program Files%\DevExpress 2011.x\eXpressApp Framework\Tools\EasyTest Right click the TestExecutor reference and set the “Copy Local” setting to True.  This forces the TestExecutor executable to be available in the bin directory which is where the EasyTest script will be executed further down in the process. Add an Application Configuration File (app.config) to your test application.  You will need to make a few modifications to have SpecFlow generate Microsoft style unit tests.  First add the section handler for SpecFlow and then set your choice of testing framework.  I prefer MS Tests for my projects. Add the EasyTest configuration file to your project.  Add a new XML file and call it Config.xml. Open the properties window for the Config.xml file and set the “Copy to Ouput Directory” to “Copy Always”. You will setup the Config file according to the specifications of the EasyTest library my mapping to your executable and other settings.  You can find the details for the configuration of EasyTest here.  My file looks like this Create a new folder in your test project called “StepDefinitions”.  Add a new SpecFlow Step Definition file item under the StepDefinitions folder.  I typically call this class StepDefinition.cs. Have your step definition inherit from the Liekhus.Testing.BDD.DevExpress.StepDefinition class.  This will give you the default behaviors for your test in the next section. OK.  Now that we have done this series of steps, we will work on simplifying this.  This is an early preview of this new project and is not fully ready for consumption.  If you would like to experiment with it, please feel free.  Our goals are to make this a installable project on it’s own with it’s own project templates and default settings.  This will be coming in later versions.  Currently this project is in Alpha release. Let’s write our first test Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow “Feature” files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature file as you do your class files after the test they are performing. Writing a feature file uses the Cucumber syntax of Given… When… Then.  Think of it in these terms.  Givens are the pre-conditions for the test.  
The Whens are the actual steps for the test being performed.  The Thens are the verification steps that confirm your test either passed or failed.  All of these steps are generated into a an EasyTest format and executed against your XAF project.  You can find more on the Cucumber syntax by using the Secret Ninja Cucumber Scrolls.  This document has several good styles of tests, plus you can get your fill of Chuck Norris vs Ninjas.  Pretty humorous document but full of great content. My first test is going to test the entry of a new Album into the application and is outlined below. The Feature section at the top is more for your documentation purposes.  Try to be descriptive of the test so that it makes sense to the next person behind you.  The Scenario outline is described in the Ninja Scrolls, but think of it as test template.  You can write one test outline and have multiple datasets (Scenarios) executed against that test.  Here are the steps of my test and their descriptions Given I am starting a new test – tells our test to create a new EasyTest file And (Given) the application is open – tells EasyTest to open our application defined in the Config.xml When I am at the “Albums” screen – tells XAF to navigate to the Albums list view And (When) I click the “New:Album” button – tells XAF to click the New Album button on the ribbon And (When) I enter the following information – tells XAF to find the field on the screen and put the value in that field And (When) I click the “Save and Close” button – tells XAF to click the “Save and Close” button on the detail window Then I verify results as “user” – tells the testing framework to execute the EasyTest as your configured user Once you compile and prepare your tests you should see the following in your Test View.  For each of your CreateNewAlbum lines in your scenarios, you will see a new test ready to execute. From here you will use your testing framework of choice to execute the test.  This in turn will execute the EasyTest framework to call back into your XAF application and test your business application. Again, please remember that this is an early preview and we are still working out the details.  Please let us know if you have any comments/questions/concerns. Thanks and happy testing.

    Read the article

  • Cannot connect with Huawei E173 after upgrade to 12.10 using Network Manager

    - by user104195
    Since upgrade from 12.04 to 12.10 I can't connect to internet using mobile broadband modem Huawei e173. It worked earlier without problems and now it seems to be properly recognized (at least its connections appear in network manager applet), and after selecting connection manually it starts connection procedure. After about 20 seconds it returns to state disconnected. After browsing internet I've found that running network manager with: NM_PPP_DEBUG=1 /usr/sbin/NetworkManager --no-daemon After inserting modem I get: NetworkManager[507]: <warn> (ttyUSB2): failed to look up interface index NetworkManager[507]: <info> (ttyUSB2): new GSM/UMTS device (driver: 'option1' ifindex: 0) NetworkManager[507]: <info> (ttyUSB2): exported as /org/freedesktop/NetworkManager/Devices/2 NetworkManager[507]: <info> (ttyUSB2): now managed NetworkManager[507]: <info> (ttyUSB2): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2] NetworkManager[507]: <info> (ttyUSB2): deactivating device (reason 'managed') [2] NetworkManager[507]: <info> (ttyUSB2): device state change: unavailable -> disconnected (reason 'none') [20 30 0] where 'failed to look up interface index' seems to be suspicious. After starting connecting: NetworkManager[507]: <info> Activation (ttyUSB2) starting connection 'Plus - Dostep standardowy' NetworkManager[507]: <info> (ttyUSB2): device state change: disconnected -> prepare (reason 'none') [30 40 0] NetworkManager[507]: <info> Activation (ttyUSB2) Stage 1 of 5 (Device Prepare) scheduled... NetworkManager[507]: <info> Activation (ttyUSB2) Stage 1 of 5 (Device Prepare) started... NetworkManager[507]: <info> (ttyUSB2): device state change: prepare -> need-auth (reason 'none') [40 60 0] NetworkManager[507]: <info> Activation (ttyUSB2) Stage 1 of 5 (Device Prepare) complete. NetworkManager[507]: <info> Activation (ttyUSB2) Stage 1 of 5 (Device Prepare) scheduled... NetworkManager[507]: <info> Activation (ttyUSB2) Stage 1 of 5 (Device Prepare) started... NetworkManager[507]: <info> (ttyUSB2): device state change: need-auth -> prepare (reason 'none') [60 40 0] NetworkManager[507]: <info> Activation (ttyUSB2) Stage 1 of 5 (Device Prepare) complete. NetworkManager[507]: <info> WWAN now enabled by management service NetworkManager[507]: <info> Activation (ttyUSB2) Stage 2 of 5 (Device Configure) scheduled... NetworkManager[507]: <info> Activation (ttyUSB2) Stage 2 of 5 (Device Configure) starting... NetworkManager[507]: <info> (ttyUSB2): device state change: prepare -> config (reason 'none') [40 50 0] NetworkManager[507]: <info> Activation (ttyUSB2) Stage 2 of 5 (Device Configure) successful. NetworkManager[507]: <info> Activation (ttyUSB2) Stage 3 of 5 (IP Configure Start) scheduled. NetworkManager[507]: <info> Activation (ttyUSB2) Stage 2 of 5 (Device Configure) complete. NetworkManager[507]: <info> Activation (ttyUSB2) Stage 3 of 5 (IP Configure Start) started... NetworkManager[507]: <info> (ttyUSB2): device state change: config -> ip-config (reason 'none') [50 70 0] NetworkManager[507]: <info> starting PPP connection NetworkManager[507]: <info> pppd started with pid 663 NetworkManager[507]: <info> Activation (ttyUSB2) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... NetworkManager[507]: <info> Activation (ttyUSB2) Stage 3 of 5 (IP Configure Start) complete. NetworkManager[507]: <info> Activation (ttyUSB2) Stage 4 of 5 (IPv6 Configure Timeout) started... NetworkManager[507]: <info> Activation (ttyUSB2) Stage 4 of 5 (IPv6 Configure Timeout) complete. 
Plugin /usr/lib/pppd/2.4.5/nm-pppd-plugin.so loaded. ** Message: nm-ppp-plugin: (plugin_init): initializing ** Message: nm-ppp-plugin: (nm_phasechange): status 3 / phase 'serial connection' Removed stale lock on ttyUSB2 (pid 32146) using channel 23 NetworkManager[507]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) NetworkManager[507]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. NetworkManager[507]: <warn> /sys/devices/virtual/net/ppp0: couldn't determine device driver; ignoring... Using interface ppp0 Connect: ppp0 <--> /dev/ttyUSB2 ** Message: nm-ppp-plugin: (nm_phasechange): status 5 / phase 'establish' sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] NetworkManager[507]: <warn> pppd timed out or didn't initialize our dbus module NetworkManager[507]: <info> Activation (ttyUSB2) Stage 4 of 5 (IPv4 Configure Timeout) scheduled... NetworkManager[507]: <info> Activation (ttyUSB2) Stage 4 of 5 (IPv4 Configure Timeout) started... NetworkManager[507]: <info> (ttyUSB2): device state change: ip-config -> failed (reason 'ip-config-unavailable') [70 120 5] NetworkManager[507]: <warn> Activation (ttyUSB2) failed for connection 'Plus - Dostep standardowy' NetworkManager[507]: <info> Activation (ttyUSB2) Stage 4 of 5 (IPv4 Configure Timeout) complete. NetworkManager[507]: <info> (ttyUSB2): device state change: failed -> disconnected (reason 'none') [120 30 0] NetworkManager[507]: <info> (ttyUSB2): deactivating device (reason 'none') [0] Terminating on signal 15 ** Message: nm-ppp-plugin: (nm_phasechange): status 10 / phase 'terminate' sent [LCP TermReq id=0x2 "User request"] NetworkManager[507]: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0) where repeated: sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x64b4024a> <pcomp> <accomp>] last for about 20 seconds. I've tried to downgrade network manager but failed due to many dependencies. Can anyone point me to solution or tell what should I do to further investigate the problem?
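
    One way to narrow this down is to take NetworkManager out of the picture and dial manually, for example with wvdial. A minimal sketch of /etc/wvdial.conf, assuming the modem is still at /dev/ttyUSB2; the APN, phone number and dummy credentials are placeholders for your operator's values:

        ; test configuration for dialing outside NetworkManager
        [Dialer Defaults]
        Modem = /dev/ttyUSB2
        Baud = 460800
        Init1 = ATZ
        Init2 = AT+CGDCONT=1,"IP","internet"
        Phone = *99#
        Username = blank
        Password = blank
        Stupid Mode = 1

    Then run `sudo wvdial`. If it negotiates an IP address, the modem and SIM are fine and the problem sits in NetworkManager/ModemManager; if you see the same LCP timeouts, look at the APN, signal or SIM instead.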

    Read the article

  • How to set up a secure cookie on WebLogic Server

    - by adejuanc
    WebLogic Server allows a user to securely access HTTPS resources in a session that was initiated using HTTP, without loss of session data. To enable this feature, add AuthCookieEnabled="true" to the WebServer element in config.xml:

        <WebServer Name="myserver" AuthCookieEnabled="true"/>

    Setting AuthCookieEnabled to true, which is the default setting, causes the WebLogic Server instance to send a new secure cookie, _WL_AUTHCOOKIE_JSESSIONID, to the browser when authenticating via an HTTPS connection. Once the secure cookie is set, the session is allowed to access other security-constrained HTTPS resources only if the cookie is sent from the browser. Thus, WebLogic Server uses two cookies: the JSESSIONID cookie and the _WL_AUTHCOOKIE_JSESSIONID cookie. By default, the JSESSIONID cookie is never secure, but the _WL_AUTHCOOKIE_JSESSIONID cookie is always secure. A secure cookie is only sent when an encrypted communication channel is in use. Assuming a standard HTTPS login (HTTPS is an encrypted HTTP connection), your browser gets both cookies. For subsequent HTTP access, you are considered authenticated if you have a valid JSESSIONID cookie, but for HTTPS access, you must have both cookies to be considered authenticated. If you only have the JSESSIONID cookie, you must re-authenticate.

    To configure this in the Admin Console:

    1. Log into the WebLogic Admin Console.
    2. Under Domain Structure, click on <domainname>.
    3. Select the "Web Applications" tab.
    4. Select "Lock and Edit" in the Change Center.
    5. Click the "Auth Cookie Enabled" checkbox.
    6. Restart to confirm the changes.
    7. Test an application and view the cookie that got stored as "JSESSIONID".

    To configure the web application's weblogic-application.xml file:

    1. Run the following to extract the file from the application's EAR:

        $PATH_JDK_HOME\bin\jar -xvf easy-web-examples.ear META-INF/weblogic-application.xml

    2. Add <cookie-secure>true</cookie-secure> between <session-descriptor> </session-descriptor> in weblogic-application.xml.
    3. Run the following to repackage the file into the application:

        $PATH_JDK_HOME\bin\jar -uvf easy-web-examples.ear META-INF/weblogic-application.xml

    4. Deploy the application into WebLogic.

    For further information, please read the documentation on "Using Secure Cookies to Prevent Session Stealing": http://download.oracle.com/docs/cd/E12840_01/wls/docs103/security/thin_client.html#wp1053780

    Read the article

  • Stop Office 2010 Upload Center Icon from Displaying in the Taskbar

    - by Mysticgeek
One of the new features in Office 2010 is the ability to upload your files to Office Web Apps. When you do, an Upload Center icon appears in the Taskbar and helps manage documents. Here’s how to stop it from showing up. If you’re running Office 2010 and upload files to the web, you’ll notice the Microsoft Office Upload Center Icon appears on the Taskbar in the Notification Area. It will stay there even after you’re done uploading the document and closed out of all Office apps. You can use this to monitor and control the documents you’re uploading to the web. Getting rid of it is fairly simple. Right-click the icon and select Settings. When the Microsoft Office Upload Center Settings window appears, under Display Options, uncheck Display icon in notification area and click OK. That is all there is to it…now it will no longer appear in the Taskbar. After you upload your first document, it will also want to start up with Windows. You can go into msconfig and disable it from automatically starting up. If you need to access it again, it’s part of Office 2010 Tools which you can access from the Start Menu. Or you can type upload center into the Search box in the Start Menu and hit Enter. If you upload a lot of work to Microsoft Web Apps you might find this tool useful, but if you only occasionally upload docs, you might be annoyed by it always being in the Taskbar.

    Read the article

  • No GLX on Intel card with multiseat with additional nVidia card

    - by MeanEYE
    I have multiseat configured and my Xorg has 2 server layouts. One is for nVidia card and other is for Intel card. They both work, but display server assigned to Intel card doesn't have hardware acceleration since DRI and GLX module being used is from nVidia driver. So my question is, can I configure layouts somehow to use right DRI and GLX with each card? My Xorg.conf: Section "ServerLayout" Identifier "Default" Screen 0 "Screen0" 0 0 Option "Xinerama" "0" EndSection Section "ServerLayout" Identifier "TV" Screen 0 "Screen1" 0 0 Option "Xinerama" "0" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL E198WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 610" EndSection Section "Device" Identifier "Device1" Driver "intel" BusID "PCI:0:2:0" Option "AccelMethod" "uxa" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-1" Option "metamodes" "DFP-0: nvidia-auto-select +1440+0, DFP-1: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Log file for Intel: [ 18.239] X.Org X Server 1.13.0 Release Date: 2012-09-05 [ 18.239] X Protocol Version 11, Revision 0 [ 18.239] Build Operating System: Linux 2.6.24-32-xen x86_64 Ubuntu [ 18.239] Current Operating System: Linux bytewiper 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 [ 18.239] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.5.0-18-generic root=UUID=fc0616fd-f212-4846-9241-ba4a492f0513 ro quiet splash [ 18.239] Build Date: 20 September 2012 11:55:20AM [ 18.239] xorg-server 2:1.13.0+git20120920.70e57668-0ubuntu0ricotz (For technical support please see http://www.ubuntu.com/support) [ 18.239] Current version of pixman: 0.26.0 [ 18.239] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 18.239] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 18.239] (==) Log file: "/var/log/Xorg.1.log", Time: Wed Nov 21 18:32:14 2012 [ 18.239] (==) Using config file: "/etc/X11/xorg.conf" [ 18.239] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 18.239] (++) ServerLayout "TV" [ 18.239] (**) |-->Screen "Screen1" (0) [ 18.239] (**) | |-->Monitor "Monitor1" [ 18.240] (**) | |-->Device "Device1" [ 18.240] (**) Option "Xinerama" "0" [ 18.240] (==) Automatically adding devices [ 18.240] (==) Automatically enabling devices [ 18.240] (==) Automatically adding GPU devices [ 18.240] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 18.240] Entry deleted from font path. 
[ 18.240] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 18.240] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 18.240] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. [ 18.240] (II) Loader magic: 0x7f6917944c40 [ 18.240] (II) Module ABI versions: [ 18.240] X.Org ANSI C Emulation: 0.4 [ 18.240] X.Org Video Driver: 13.0 [ 18.240] X.Org XInput driver : 18.0 [ 18.240] X.Org Server Extension : 7.0 [ 18.240] (II) config/udev: Adding drm device (/dev/dri/card0) [ 18.241] (--) PCI: (0:0:2:0) 8086:0152:1043:84ca rev 9, Mem @ 0xf7400000/4194304, 0xd0000000/268435456, I/O @ 0x0000f000/64 [ 18.241] (--) PCI:*(0:1:0:0) 10de:104a:1458:3546 rev 161, Mem @ 0xf6000000/16777216, 0xe0000000/134217728, 0xe8000000/33554432, I/O @ 0x0000e000/128, BIOS @ 0x????????/524288 [ 18.241] (II) Open ACPI successful (/var/run/acpid.socket) [ 18.241] Initializing built-in extension Generic Event Extension [ 18.241] Initializing built-in extension SHAPE [ 18.241] Initializing built-in extension MIT-SHM [ 18.241] Initializing built-in extension XInputExtension [ 18.241] Initializing built-in extension XTEST [ 18.241] Initializing built-in extension BIG-REQUESTS [ 18.241] Initializing built-in extension SYNC [ 18.241] Initializing built-in extension XKEYBOARD [ 18.241] Initializing built-in extension XC-MISC [ 18.241] Initializing built-in extension SECURITY [ 18.241] Initializing built-in extension XINERAMA [ 18.241] Initializing built-in extension XFIXES [ 18.241] Initializing built-in extension RENDER [ 18.241] Initializing built-in extension RANDR [ 18.241] Initializing built-in extension COMPOSITE [ 18.241] Initializing built-in extension DAMAGE [ 18.241] Initializing built-in extension MIT-SCREEN-SAVER [ 18.241] Initializing built-in extension DOUBLE-BUFFER [ 18.241] Initializing built-in extension RECORD [ 18.241] Initializing built-in extension DPMS [ 18.241] Initializing built-in extension X-Resource [ 18.241] Initializing built-in extension XVideo [ 18.241] Initializing built-in extension XVideo-MotionCompensation [ 18.241] Initializing built-in extension XFree86-VidModeExtension [ 18.241] Initializing built-in extension XFree86-DGA [ 18.241] Initializing built-in extension XFree86-DRI [ 18.241] Initializing built-in extension DRI2 [ 18.241] (II) LoadModule: "glx" [ 18.241] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so [ 18.247] (II) Module glx: vendor="NVIDIA Corporation" [ 18.247] compiled for 4.0.2, module version = 1.0.0 [ 18.247] Module class: X.Org Server Extension [ 18.247] (II) NVIDIA GLX Module 310.19 Thu Nov 8 01:12:43 PST 2012 [ 18.247] Loading extension GLX [ 18.247] (II) LoadModule: "intel" [ 18.248] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 18.248] (II) Module intel: vendor="X.Org Foundation" [ 18.248] compiled for 1.13.0, module version = 2.20.13 [ 18.248] Module class: X.Org Video Driver [ 18.248] ABI class: X.Org Video Driver, version 13.0 [ 18.248] (II) intel: Driver for Intel Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 
945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, B43, Clarkdale, Arrandale, Sandybridge Desktop (GT1), Sandybridge Desktop (GT2), Sandybridge Desktop (GT2+), Sandybridge Mobile (GT1), Sandybridge Mobile (GT2), Sandybridge Mobile (GT2+), Sandybridge Server, Ivybridge Mobile (GT1), Ivybridge Mobile (GT2), Ivybridge Desktop (GT1), Ivybridge Desktop (GT2), Ivybridge Server, Ivybridge Server (GT2), Haswell Desktop (GT1), Haswell Desktop (GT2), Haswell Desktop (GT2+), Haswell Mobile (GT1), Haswell Mobile (GT2), Haswell Mobile (GT2+), Haswell Server (GT1), Haswell Server (GT2), Haswell Server (GT2+), Haswell SDV Desktop (GT1), Haswell SDV Desktop (GT2), Haswell SDV Desktop (GT2+), Haswell SDV Mobile (GT1), Haswell SDV Mobile (GT2), Haswell SDV Mobile (GT2+), Haswell SDV Server (GT1), Haswell SDV Server (GT2), Haswell SDV Server (GT2+), Haswell ULT Desktop (GT1), Haswell ULT Desktop (GT2), Haswell ULT Desktop (GT2+), Haswell ULT Mobile (GT1), Haswell ULT Mobile (GT2), Haswell ULT Mobile (GT2+), Haswell ULT Server (GT1), Haswell ULT Server (GT2), Haswell ULT Server (GT2+), Haswell CRW Desktop (GT1), Haswell CRW Desktop (GT2), Haswell CRW Desktop (GT2+), Haswell CRW Mobile (GT1), Haswell CRW Mobile (GT2), Haswell CRW Mobile (GT2+), Haswell CRW Server (GT1), Haswell CRW Server (GT2), Haswell CRW Server (GT2+), ValleyView PO board [ 18.248] (++) using VT number 8 [ 18.593] (II) intel(0): using device path '/dev/dri/card0' [ 18.593] (**) intel(0): Depth 24, (--) framebuffer bpp 32 [ 18.593] (==) intel(0): RGB weight 888 [ 18.593] (==) intel(0): Default visual is TrueColor [ 18.593] (**) intel(0): Option "AccelMethod" "uxa" [ 18.593] (--) intel(0): Integrated Graphics Chipset: Intel(R) Ivybridge Desktop (GT1) [ 18.593] (**) intel(0): Relaxed fencing enabled [ 18.593] (**) intel(0): Wait on SwapBuffers? enabled [ 18.593] (**) intel(0): Triple buffering? enabled [ 18.593] (**) intel(0): Framebuffer tiled [ 18.593] (**) intel(0): Pixmaps tiled [ 18.593] (**) intel(0): 3D buffers tiled [ 18.593] (**) intel(0): SwapBuffers wait enabled ... [ 20.312] (II) Module fb: vendor="X.Org Foundation" [ 20.312] compiled for 1.13.0, module version = 1.0.0 [ 20.312] ABI class: X.Org ANSI C Emulation, version 0.4 [ 20.312] (II) Loading sub module "dri2" [ 20.312] (II) LoadModule: "dri2" [ 20.312] (II) Module "dri2" already built-in [ 20.312] (==) Depth 24 pixmap format is 32 bpp [ 20.312] (II) intel(0): [DRI2] Setup complete [ 20.312] (II) intel(0): [DRI2] DRI driver: i965 [ 20.312] (II) intel(0): Allocated new frame buffer 1920x1080 stride 7680, tiled [ 20.312] (II) UXA(0): Driver registered support for the following operations: [ 20.312] (II) solid [ 20.312] (II) copy [ 20.312] (II) composite (RENDER acceleration) [ 20.312] (II) put_image [ 20.312] (II) get_image [ 20.312] (==) intel(0): Backing store disabled [ 20.312] (==) intel(0): Silken mouse enabled [ 20.312] (II) intel(0): Initializing HW Cursor [ 20.312] (II) intel(0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 20.313] (**) intel(0): DPMS enabled [ 20.313] (==) intel(0): Intel XvMC decoder enabled [ 20.313] (II) intel(0): Set up textured video [ 20.313] (II) intel(0): [XvMC] xvmc_vld driver initialized. 
[ 20.313] (II) intel(0): direct rendering: DRI2 Enabled [ 20.313] (==) intel(0): hotplug detection: "enabled" [ 20.332] (--) RandR disabled [ 20.335] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) [ 20.335] (II) intel(0): Setting screen physical size to 508 x 285 [ 20.338] (II) XKB: reuse xkmfile /var/lib/xkb/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm [ 20.340] (II) config/udev: Adding input device Power Button (/dev/input/event1) [ 20.340] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 20.340] (II) LoadModule: "evdev" [ 20.340] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
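One direction worth exploring for the Intel seat (this is an assumption on my part, not something stated in the question or its logs): because ModulePath lives in the global Files section rather than in a ServerLayout, the Intel server could be started with its own configuration file whose ModulePath leaves out the NVIDIA extra-modules directory, so the stock GLX extension is loaded instead of NVIDIA's libglx. A hypothetical sketch:

    # /etc/X11/xorg.intel.conf -- hypothetical per-seat config for the Intel server,
    # started as its own instance, e.g.: X :1 -config xorg.intel.conf
    Section "Files"
        # list only the standard module directory so NVIDIA's libglx.so in
        # .../extra-modules is never picked up by this server instance
        ModulePath "/usr/lib/xorg/modules"
    EndSection
    # The Intel Device/Screen/ServerLayout sections from the xorg.conf above would
    # be duplicated here, while the NVIDIA seat keeps using the default config.

Paths and file names here are placeholders; the idea is simply to give each X server instance a module search path that matches its driver.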

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.
Drivers
Elastic compute and storage resources
Cost avoidance
Solution
Here’s a sketch of a solution using our Windows Azure HPC SDK:
Ingredients
Web Role – hosts an HPC Scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API so that other tools (including Visual Studio) can post jobs as well.
Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
Database – stores state information about the job queue and resource configuration for the solution.
Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.
Training
Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
Windows Azure HPC Scheduler (3 labs): The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
Solution Summary
Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many of the paper-based processes into web-based workflows for their 1.6 million customers. LADWP implemented a Self Service Portal using Oracle WebCenter Portal & Oracle WebCenter Content, and Oracle SOA Suite for the integration of their complex back-end systems infrastructure. The new portal has received extremely positive feedback from not only the customers and users of the portal, but also other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.
Company Overview
Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.
Business Challenges
The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, as well as be the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many of the paper-based processes into web-based workflows for customers. This includes automation of:
Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
Financial Assistance Programs
Customer Rebate Programs
Turn Off/Turn On/Transfer of Services
Outage Reporting
eNotification (SMS, email)
Solution Deployed
LADWP implemented a Self Service Portal using Oracle WebCenter Portal & Oracle WebCenter Content. Using Oracle SOA Suite they integrated various back-end systems, including:
Oracle Siebel CRM
IBM mainframe-based CIS
FILENET for document management
EBP Electronic Bill Payment System
HP Imprint System for BillXML data
Other systems including outage reporting systems, SMS service, etc.
The new portal’s features include:
Complete graphical redesign based on best practices in UI design for high usability
Customer Self Service implemented through MyAccount (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
Financial Assistance Programs (CRM, WebCenter)
Customer Rebate Programs (CRM, WebCenter)
Turn On/Off/Transfer of services (Commercial & Residential)
Outage Reporting
eNotification (SMS, email)
Multilingual (English & Spanish) – using WebCenter multi-language support
Section 508 (ADA) compliant
Search – using WebCenter SES (Secured Enterprise Search)
Distributed authorship in WebCenter Content
Mobile access (any mobile browser)
Business Results
The new portal has received extremely positive feedback from not only customers and users of the portal, but also other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.
Additional Information
LADWP OpenWorld presentation
Oracle WebCenter Portal
Oracle WebCenter Content
Oracle SOA Suite

    Read the article

  • Is ASP.NET MVC completely (and exclusively) based on conventions?

    - by Mike Valeriano
TL;DR
Is there a "Hello World!" ASP.NET MVC tutorial out there that doesn't rely on conventions and "stock" projects? Is it even possible to take advantage of the technology without reusing the default file structure, and start from a single "hello_world.asp" file or something (like in PHP)? Am I completely mistaken and should I be looking somewhere else, maybe this? I'm interested in the MVC framework, not Web Forms.
Background
I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development became relevant for me once again. I'm no professional, but I try to gain as much knowledge and control over the technology I'm working with as possible. I'm using Visual Studio 2012 for C# - my "desktop" language of choice - and since I got the Professional Edition from Dreamspark, the Web Development Tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC framework got my attention because the MVC pattern is something I can really relate to, since it provides the control I want but... not quite. Learning PHP was easy - right from the start I could just create a "hello_world.php" file and do something like this for immediate results:
<!-- file: hello_world.php -->
<?php echo "Hello World!"; ?>
But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month away or so) that starts like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the "Empty" project template for a new ASP.NET MVC 4 application in VS2012 is not empty at all: several files and folders are created for you - much like a new C# desktop application project, but with C# I can in fact start from scratch, creating the project structure myself. That is not the case with PHP, where:
I can choose from a plethora of different MVC frameworks
I can just create my own framework
I can just skip frameworks altogether, and toss random PHP along with my HTML in a single file and make it work
I understand the framework needs to establish some rules, but what if I just want to create a single-page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far in any of those tutorials mainly for this reason, but if that's the only way to do it, I'll go for it using one of the books I mentioned before. This is my first contact with ASP.NET, but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English - it is not my primary language.)

    Read the article

  • Devoxx 2011: Java EE 6 Hands-on Lab Delivered

    - by arungupta
With Alexis's help, I delivered a Java EE 6 hands-on lab to a packed room of 40+ attendees at Devoxx 2011. The lab was derived from the OTN Developer Days 2012 version but added a lot more content to showcase several Java EE 6 technologies. The problem statement from the lab document states: This hands-on lab builds a typical 3-tier Java EE 6 web application that retrieves customer information from a database and displays it in a web page. The application also allows new customers to be added to the database. String-based and type-safe queries are used to query and add rows to the database. Each row in the database table is published as a RESTful resource and is then accessed programmatically. Typical design patterns required by a web application, like validation, caching, observer, and partial page rendering, and cross-cutting concerns like logging, are explained and implemented using different Java EE 6 technologies. The lab covered Java Persistence API 2, Servlet 3, Enterprise JavaBeans 3.1, JavaServer Faces 2, Java API for RESTful Web Services 1.1, Contexts and Dependency Injection 1.0, and Bean Validation 1.0 over 47 pages of detailed self-paced instructions. Here is the complete Table of Contents: The lab can be downloaded from here and requires only the NetBeans IDE "All" or "Java EE" version, which includes GlassFish anyway. All the feedback received from the lab has been incorporated into the instructions and bugs filed (Updated 49559, 205232, 205248, 205256). 80% of the attendees could easily complete the lab and some even completed it in much less than 3 hours. That indicates that either more content needs to be added to the lab or the intellectual level of the attendees at the conference was pretty high. I think the lab has enough content for 3 hours, but we moved at a much faster pace, so I conclude the latter. Truly a joy to conduct a lab for 40 Devoxxians! Another related lab that might be handy for folks is "Develop, Deploy, and Monitor your Java EE 6 applications using GlassFish 3.1 Cluster". It explains how to:
Create a 2-instance GlassFish cluster
Front-end it with a web server and a load balancer
Demonstrate session replication and failover
Monitor the application using JavaScript
The complete lab instructions and source code are available and you can try them. I plan to continue evolving the contents of the Java EE 6 hands-on lab to cover more technologies and features, and will announce them on this blog. Let me know what else you would like to see in future versions.
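To give a flavor of the kind of code the lab walks through, here is a minimal sketch of a JPA 2 entity published as a JAX-RS 1.1 resource, along the lines of the problem statement above. The Customer entity, its fields, and the resource path are my own placeholders, not taken from the lab material, and the two classes would live in separate source files:

    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.xml.bind.annotation.XmlRootElement;

    // A row in the database table, later published as a RESTful resource.
    @Entity
    @XmlRootElement
    public class Customer {
        @Id @GeneratedValue
        private Long id;
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // EJB 3.1 session bean doubling as a JAX-RS 1.1 resource: all Customer rows
    // are exposed at /customers and read with a typed JPA 2 query.
    @Stateless
    @Path("customers")
    public class CustomerResource {

        @PersistenceContext // default persistence unit of the application (assumed)
        private EntityManager em;

        @GET
        @Produces("application/xml")
        public List<Customer> all() {
            return em.createQuery("SELECT c FROM Customer c", Customer.class)
                     .getResultList();
        }
    }

The query shown is the string-based (JPQL) form; the type-safe variant covered by the lab would use the JPA 2 Criteria API instead.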

    Read the article

