Search Results

Search found 16948 results on 678 pages for 'static analysis'.


  • Dynamic LINQ in an Assembly Near By

    - by Ricardo Peres
    You may recall my post on Dynamic LINQ. I said then that you had to download Microsoft's samples and compile the DynamicQuery project (or just grab my copy), but there's another way. It turns out Microsoft included the Dynamic LINQ classes in the System.Web.Extensions assembly, not the one from ASP.NET 2.0, but the one that was included with ASP.NET 3.5! The only problem is that all the types are private. Here's how to use it:

        Assembly asm = typeof(UpdatePanel).Assembly;
        Type dynamicExpressionType = asm.GetType("System.Web.Query.Dynamic.DynamicExpression");
        MethodInfo parseLambdaMethod = dynamicExpressionType
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Where(m => (m.Name == "ParseLambda") && (m.GetParameters().Length == 2))
            .Single()
            .MakeGenericMethod(typeof(DateTime), typeof(Boolean));

        Func<DateTime, Boolean> filterExpression =
            (parseLambdaMethod.Invoke(null, new Object[] { "Year == 2010", new Object[0] })
                as Expression<Func<DateTime, Boolean>>).Compile();

        List<DateTime> list = new List<DateTime>
        {
            new DateTime(2010, 1, 1),
            new DateTime(1999, 1, 12),
            new DateTime(1900, 10, 10),
            new DateTime(1900, 2, 20),
            new DateTime(2012, 5, 5),
            new DateTime(2012, 1, 20)
        };

        IEnumerable<DateTime> filteredDates = list.Where(filterExpression);
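    For comparison, here is a minimal sketch of the same filter written against the strongly typed ParseLambda<T, S> helper from the downloadable DynamicQuery sample mentioned at the top of the post. It assumes a using directive for the sample's System.Linq.Dynamic namespace is in scope and reuses the list variable from the snippet above; treat the exact names as assumptions if your copy of the sample differs.

        // Parse the same string expression, but with no reflection involved
        Expression<Func<DateTime, Boolean>> typedFilter =
            DynamicExpression.ParseLambda<DateTime, Boolean>("Year == 2010");
        IEnumerable<DateTime> filtered = list.Where(typedFilter.Compile());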

    Read the article

  • Configuration setting of HttpWebRequest.Timeout value

    - by Michael Freidgeim
    I wanted to set HttpWebRequest.Timeout from configuration on the client. I was surprised that MS doesn't provide it as part of .NET configuration. (Answer in the http://forums.silverlight.net/post/77818.aspx thread: "Unfortunately specifying the timeout is not supported in current version. We may support it in the future release.") I added it to the appSettings section of app.config and read it in a method of my HttpWebRequestHelper class:

        // The Method property can be set to any of the HTTP 1.1 protocol verbs:
        // GET, HEAD, POST, PUT, DELETE, TRACE, or OPTIONS.
        public static HttpWebRequest PrepareWebRequest(string sUrl, string Method, CookieContainer cntnrCookies)
        {
            HttpWebRequest webRequest = WebRequest.Create(sUrl) as HttpWebRequest;
            webRequest.Method = Method;
            webRequest.ContentType = "application/x-www-form-urlencoded";
            webRequest.CookieContainer = cntnrCookies;
            webRequest.Timeout = ConfigurationExtensions.GetAppSetting("HttpWebRequest.Timeout", 100000); // default 100 sec - http://blogs.msdn.com/b/buckh/archive/2005/02/01/365127.aspx
            /*
                // try to change - from http://www.codeproject.com/csharp/ClientTicket_MSNP9.asp
                webRequest.AllowAutoRedirect = false;
                webRequest.Pipelined = false;
                webRequest.KeepAlive = false;
                webRequest.ProtocolVersion = new Version(1,0); // protocol 1.0 works better than 1.1 ??
            */
            // MNF 26/5/2005 Some web servers expect UserAgent to be specified,
            // so let's say it's IE6
            webRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322)";
            DebugOutputHelper.PrintHttpWebRequest(webRequest, TraceOutputHelper.LineWithTrace(""));
            return webRequest;
        }

    Related link: http://stackoverflow.com/questions/387247/i-need-help-setting-net-httpwebrequest-timeoutv
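    For reference, a minimal sketch of the matching appSettings entry the helper above would read. The key name comes from the GetAppSetting call in the code; the 300000 ms value is only an illustrative override of the 100-second default.

        <configuration>
          <appSettings>
            <!-- HttpWebRequest timeout in milliseconds, read by HttpWebRequestHelper.PrepareWebRequest -->
            <add key="HttpWebRequest.Timeout" value="300000" />
          </appSettings>
        </configuration>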

    Read the article

  • World Record Oracle Business Intelligence Benchmark on SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server configured with four SPARC T4 3.0 GHz processors delivered the first and best performance of 25,000 concurrent users on the Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 10. A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with the Oracle BI server cache set to ON. The benchmark data clearly shows that the underlying hardware, the SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scale within a single system, supporting 25,000 concurrent users while executing 415 transactions/sec. The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server. Oracle Internet Directory configured on the SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time. A SPARC T4-4 with internal Solid State Drives (SSD) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32767 sub-directories in a Web Catalog directory. The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and the SPARC T4-4 server proved to be a successful combination, providing sub-second response times for end-user transactions and consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of headroom for increased load. The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability and performance. Oracle BI EE 11g is a newer version of the Business Intelligence Suite with richer and superior functionality. Results produced with the Oracle BI EE 11g benchmark are not comparable to results with the Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI.

    Performance Landscape

    Results for the Oracle BI EE 11g version of the benchmark (not comparable to the Oracle BI EE 10g version):

        System                                   | Number of Users | Response Time (sec)
        1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz)    | 25,000          | 0.36

    Results for the Oracle BI EE 10g version of the benchmark (not comparable to the Oracle BI EE 11g version):

        System                                   | Number of Users
        2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)  | 50,000
        1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)  | 28,000

    Configuration Summary

    Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 128 GB memory; 4 x 300 GB internal SSD
    Storage Configuration: Sun ZFS Storage 7120 with 16 x 146 GB disks
    Software Configuration: Oracle Solaris 10 8/11; Oracle Solaris Studio 12.1; Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0); Oracle WebLogic Server 10.3.5; Oracle Internet Directory 11.1.1.6.0; Oracle Database 11g Release 2

    Benchmark Description

    Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more.

    The Oracle BI EE benchmark test used five different business user roles: Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports in the form of a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard.

    The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill-down. For example, a Service Manager logs into the system and navigates to his own set of dashboards using Service Manager. The BI user selects the Service Effectiveness dashboard, which shows him four distinct reports, Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call, spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of 4 related reports, and drills down on some of the reports to see the detail data. The BI user continues to view more dashboards, for example Customer Satisfaction and Service Request Overview. After navigating through those dashboards, the user logs out of the application. The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real-world customer scenario.

    See Also
    SPARC T4-4 Server (oracle.com, OTN)
    Oracle Business Intelligence (oracle.com, OTN)
    Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
    WebLogic Suite (oracle.com, OTN)
    Oracle Solaris (oracle.com, OTN)

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Adding Facebook IPv6 to Centos, getting CurlException 7

    - by Nick
    I currently get the following error. After searching about this issue (correct me if I'm wrong), I believe that adding/configuring IPv6 should solve the problem.

        PHP Fatal error:  Uncaught CurlException: 7: Failed to connect to 2a03:2880:10:8f02:face:b00c:0:26: Network is unreachable\n thrown in /var/www/vhosts/facedex.net/httpdocs/fb/apps/seemyfuture/src/base_facebook.php on line 886

    The problem is I don't know the right way to add it. There seem to be many methods:

    http://tldp.org/HOWTO/Linux+IPv6-HOWTO/x1035.html#AEN1044
    http://unix.stackexchange.com/questions/34093/static-ipv4-ipv6-configuration-on-centos-6-2

    My netstat shows this. The shell doesn't recognize -rn6, though; it reports "invalid option -- 6".

        netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS  Window  irtt  Iface
        27.254.38.128   0.0.0.0         255.255.255.128  U      0    0       0     eth0
        169.254.0.0     0.0.0.0         255.255.0.0      U      0    0       0     eth0
        0.0.0.0         27.254.38.254   0.0.0.0          UG     0    0       0     eth0

    FYI: I'm using CentOS 5.7. Thank you a lot in advance.
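    For context, the usual static approach on CentOS 5/6 is sketched below. This is not a verified fix for this host: the address, prefix and gateway are placeholder values (documentation range 2001:db8::/32) that would have to come from the hosting provider, and outbound IPv6 also requires the provider to route IPv6 to the box at all.

        # /etc/sysconfig/network -- enable IPv6 globally
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- placeholder address and gateway
        IPV6INIT=yes
        IPV6ADDR=2001:db8::10/64
        IPV6_DEFAULTGW=2001:db8::1

        # apply the change
        service network restart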

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C-level executives and managers of insurance carriers for the IASA 2013 Conference. Insurance Accounting/Regulation and Technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference. As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor. The IASA organizers had pre-arranged fast track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'Speed Dating' introductions to possible solutions and services. Oracle participated in a number of these, which were very well subscribed. Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper. As we navigated through the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas:

    - Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication.
    - Doing more with less, and certainly expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or to get a lot more from their software solutions for the same buck.
    - Doing things smarter. A recognition that, given the pace of technological advance, standing still effectively means going backwards. Technology, and in particular technology's interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but now, at the enterprise level, ever more effective technology exploitation is beginning to take place. Data, and in particular the knowledge gleaned from data, is refining and improving business processes. Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come. Amassing large volumes of data is one thing, but effectively analyzing that data is another. It is the results of such analysis that lead to improvements, both in terms of insurance product offerings and the processes to support them.
    - Regulatory compliance: damned if you do and damned if you don't! Clearly, around the globe a lot is changing from a regulatory perspective, and it is evident that in terms of regulatory requirements, whilst there is greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just like big data, effective regulatory compliance often hides golden nuggets that can give competitive advantages.
From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not have otherwise caught up with for a while. Fortunately, I was able to catch up on a few sessions at the close of the Exhibition.  The speaker quality was high and the audience asked challenging, but pertinent, questions.  During Dr. Jackie Freiberg’s keynote “Bye Bye Business as Usual,” the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo.  The very first strategy: Get wired for innovation.  Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters.  Today, technology and innovation go hand in hand. In speaking to delegates during and after the conference, a high degree of satisfaction could be measured from their positive comments of speaker sessions and the exhibitors. I suspect many will be back in 2014 with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.?  If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • ECMP Load Balancing in JUNOS

    - by SpacemanSpiff
    I'm trying to figure out how to use ECMP load balancing in JUNOS. I know this isn't the best way to load balance, but it's quick and dirty and gets done what I need to. In ScreenOS this was pretty easy.

    Device: SRX220
    JunOS: 10.3R2.11

    Here's what I've got so far:

        routing-options {
            static {
                route 0.0.0.0/0 {
                    next-hop [ 1.1.1.1 1.1.1.2 ];
                    metric 10;
                }
            }
            maximum-paths 2;
        }

    Will that do it?
    Tom
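    For context, the configuration usually cited for ECMP on JUNOS is a load-balancing policy exported to the forwarding table, rather than maximum-paths alone; despite the name, load-balance per-packet hashes per flow on most platforms. This is only a sketch: the policy name LB-PER-FLOW is arbitrary, and whether this particular SRX build needs anything beyond it is worth verifying.

        policy-options {
            policy-statement LB-PER-FLOW {
                then {
                    load-balance per-packet;
                }
            }
        }
        routing-options {
            forwarding-table {
                export LB-PER-FLOW;
            }
        }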

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers on the cloud (Amazon EC2 X-Large instances), with an elastic load balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps only one shared database between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience in database replication, I figured I would ask the experts. What would you recommend for the described setup?
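    For reference, one of the options on that page, the built-in streaming replication (one writable primary plus a read-only hot standby, so the load balancer would send writes to the primary only), boils down to roughly the settings below on 9.1. Host addresses, the replicator user, and the retained-WAL figure are placeholders, so treat this as a sketch rather than a tested recipe.

        # primary: postgresql.conf
        wal_level = hot_standby
        max_wal_senders = 3
        wal_keep_segments = 128

        # primary: pg_hba.conf -- allow the standby to connect for replication
        host  replication  replicator  10.0.0.2/32  md5

        # standby: postgresql.conf
        hot_standby = on

        # standby: recovery.conf (after taking a base backup from the primary)
        standby_mode = 'on'
        primary_conninfo = 'host=10.0.0.1 user=replicator password=secret'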

    Read the article

  • Cocos2d-x v3 invalid conversion from 'cocos2d::Layer* [on hold]

    - by Hammerh5
    Hello guys, I'm learning cocos2d-x v3 right now, but most of the code I can find is for version 2. My specific error is this one, which shows when I try to compile my cocos2d-x v3 project:

        invalid conversion from 'cocos2d::Layer*' to 'Game*' [-fpermissive]

    What I want to do is create a new game scene in the following code:

        // Game.cpp
        #include "Game.h"

        Scene* Game::scene()
        {
            scene *sc = CCScene::create();
            sc->setTag(TAG_GAME_SCENE);
            const Game *g = Game::create(); // Here is where the conversion fails.
            sc->addChild(g, 0, TAG_GAME_LAYER);
            return sc;
        }

    Of course, this is my header file:

        // Game.h
        #include "cocos2d.h"
        #include "Mole.h"
        #include "AppDelegate.h"

        using namespace cocos2d;

        class Game: public cocos2d::Layer
        {
            cocos2d::CCArray *moles;
            float timeBetweenMoles, timeElapsed, increaseMolesAtTime, increaseElapsed, lastMoleHiTime;
            int molesAtOnce;
            cocos2d::CCSize s;
            bool isPaused;
        public:
            CCString *missSound, *hitSound;
            static cocos2d::Scene* scene();
            virtual bool init();
            void showMole();
            void initializeGame();
            void onEnterTransitionDidFinish();
            void onExit();
            void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
            void tick(float dt);
            cocos2d::CCArray* getMoles(bool isUp);
            //LAYER_CREATE_FUNC(Game);
        };
        #endif /* GAME_H_ */

    I don't know what's wrong. I suppose this code works fine in cocos2d-x v2. Is it maybe due to some changes in the C++ version?
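    To help frame the question: the v3 samples typically give the layer its own create() via the CREATE_FUNC macro and build the scene in a createScene() helper, as in the sketch below. This is only an illustration of that common pattern under the assumption that Game derives from cocos2d::Layer, not necessarily the exact fix for the code above.

        // Game.h -- CREATE_FUNC(Game) declares "static Game* create();",
        // so create() no longer falls back to Layer::create()
        class Game : public cocos2d::Layer
        {
        public:
            static cocos2d::Scene* createScene();
            virtual bool init() override;
            CREATE_FUNC(Game);
        };

        // Game.cpp
        bool Game::init()
        {
            return cocos2d::Layer::init();
        }

        cocos2d::Scene* Game::createScene()
        {
            auto scene = cocos2d::Scene::create();
            auto layer = Game::create();   // returns Game*, so no Layer* -> Game* conversion is needed
            scene->addChild(layer);
            return scene;
        }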

    Read the article

  • How to structure a XML-based order form using ASP.NET

    - by Brendan
    First question here; please help me if I'm doing something wrong. I'm a graphic designer who's trying to teach himself ASP.NET/C#. My server-side background is PHP/WordPress and some ASP Classic, and when I do code I've hand-coded just about everything since I started learning HTML. So, as I've started to learn .NET, my code has been very manual and procedural. I'm now trying to create a really basic order form that pulls from an XML file to populate the form; there's an image, a title, a price, and selectable quantities. If I was making this form as a static HTML file, I'd have each field named manually and so on postback I could query each field to get the values. But I'm trying to do this dynamically so that I can add/remove items from the form and not have to change the code. In terms of displaying the XML, I rolled my own by loading XmlDocument and using XmlNodeList and a bunch of foreach loops to get things displayed. Then, I learned about <asp:XmlDataSource> and <asp:Repeater>, which made displaying the XML simpler by a large margin. However, I've had a really hard time getting the data that's been submitted on postback (it was implied on SO that there are better ways to get data than nested RepeaterItems). So, what I've learned so far is that you can do things a bunch of different ways in .NET. that's why I thought it'd be good to ask for answers regarding the best way to use ASP.NET to display a XML document and dynamically capture the data that's submitted. Any help is appreciated! I'm using Notepad++ to code .NET 2.0.
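    To make the question concrete, here is a minimal sketch of the pattern being described: an XmlDataSource feeding a Repeater, with the posted quantities read back by walking the Repeater items instead of naming each field by hand. The file name items.xml, the XPath, and the attribute names are all invented for illustration and would need to match the real order-form XML.

        <!-- Order.aspx (markup sketch) -->
        <asp:XmlDataSource ID="ItemsSource" runat="server"
            DataFile="~/App_Data/items.xml" XPath="/items/item" />

        <asp:Repeater ID="ItemsRepeater" runat="server" DataSourceID="ItemsSource">
            <ItemTemplate>
                <img src='<%# XPath("@image") %>' alt="" />
                <%# XPath("@title") %> - <%# XPath("@price") %>
                <asp:HiddenField ID="ItemId" runat="server" Value='<%# XPath("@id") %>' />
                <asp:TextBox ID="Quantity" runat="server" Text="0" />
            </ItemTemplate>
        </asp:Repeater>
        <asp:Button ID="Submit" runat="server" Text="Order" OnClick="Submit_Click" />

        // Order.aspx.cs (code-behind sketch)
        protected void Submit_Click(object sender, EventArgs e)
        {
            foreach (RepeaterItem item in ItemsRepeater.Items)
            {
                string id = ((HiddenField)item.FindControl("ItemId")).Value;
                int qty = int.Parse(((TextBox)item.FindControl("Quantity")).Text);
                // qty > 0 means the item was ordered; id ties it back to the XML node,
                // so items can be added or removed from the XML without code changes
            }
        }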

    Read the article

  • Allow Incoming Responses from Curl On Ubuntu 11.10 - Curl

    - by Daniel Adarve
    I'm trying to get a Curl response from an outside server; however, I noticed I can neither PING the server in question nor connect to it. I tried disabling the iptables firewall but had no success. My server is running behind a Cisco Linksys WRTN310N router with the DD-WRT firmware installed, in which I already disabled the firewall. Here are my network settings:

        ifconfig

        eth0      Link encap:Ethernet  HWaddr 00:26:b9:76:73:6b
                  inet addr:192.168.1.120  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::226:b9ff:fe76:736b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:49713 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:30987 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:52829022 (52.8 MB)  TX bytes:5438223 (5.4 MB)
                  Interrupt:16

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:341 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:341 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:27604 (27.6 KB)  TX bytes:27604 (27.6 KB)

        /etc/resolv.conf

        nameserver 192.168.1.1

        /etc/nsswitch.conf

        passwd:     compat
        group:      compat
        shadow:     compat
        hosts:      files dns
        networks:   files
        protocols:  db files
        services:   db files
        ethers:     db files
        rpc:        db files
        netgroup:   nis

        /etc/host.conf

        order hosts,bind
        multi on

        /etc/hosts

        127.0.0.1 localhost
        127.0.0.1 callcenter
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        /etc/network/interfaces

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.1.120
            netmask 255.255.255.0
            network 192.168.1.1
            broadcast 192.168.1.255
            gateway 192.168.1.1

    The URL to which I'm trying to connect is https://www.veripayment.com/integration/index.php. When I ping it from the terminal, here's what I get:

        daniel@callcenter:~$ ping www.veripayment.com
        PING www.veripayment.com (69.172.200.5) 56(84) bytes of data.

        --- www.veripayment.com ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

    Thanks in advance.

    Read the article

  • A Gentle .NET touch to Unix Touch

    - by lavanyadeepak
    The Unix world has an elegant utility called 'touch' which modifies the timestamp of the file whose path is passed as an argument to it. Unfortunately, we don't have a quick and direct equivalent in the Windows domain. However, just a few lines of C# can fill this gap and refresh any file in the file system (subject to access ACL restrictions) with the current timestamp.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace LavanyaDeepak.Utilities
        {
            class Touch
            {
                static void Main(string[] args)
                {
                    if (args.Length < 1)
                    {
                        Console.WriteLine("Please specify the path of the file to operate upon.");
                        return;
                    }

                    if (!File.Exists(args[0]))
                    {
                        try
                        {
                            FileAttributes objFileAttributes = File.GetAttributes(args[0]);
                            if ((objFileAttributes & FileAttributes.Directory) == FileAttributes.Directory)
                            {
                                Console.WriteLine("The input was not a regular file.");
                                return;
                            }
                        }
                        catch { }

                        Console.WriteLine("The file does not seem to exist.");
                        return;
                    }

                    try
                    {
                        File.SetLastWriteTime(args[0], DateTime.Now);
                        Console.WriteLine("The touch completed successfully.");
                    }
                    catch (System.UnauthorizedAccessException exUnauthException)
                    {
                        Console.WriteLine("Unable to touch file. Access is denied. The security manager responded: " + exUnauthException.Message);
                    }
                    catch (IOException exFileAccessException)
                    {
                        Console.WriteLine("Unable to touch file. The IO interface failed to complete the request and responded: " + exFileAccessException.Message);
                    }
                    catch (Exception exGenericException)
                    {
                        Console.WriteLine("Unable to touch file. An internal error occurred. The details are: " + exGenericException.Message);
                    }
                }
            }
        }
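    A quick usage sketch, assuming the listing above is saved as Touch.cs and compiled with the Framework's csc compiler; the target path is just an example.

        csc Touch.cs
        Touch.exe C:\temp\report.log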

    Read the article

  • Download Microsoft MSDN Magazine PDF Issues For Offline Reading

    - by Kavitha
    MSDN Magazine is a must read for every Microsoft developer. It provides in-depth analysis and excellent guides on all the latest Microsoft development tools and technologies. Every month one can grab this magazine on the stands or read it online for free. What if you want to read the magazine offline on your PC or mobile devices? Just grab a PDF version of the magazine and read it whenever you want. The PDF versions of MSDN Magazine are very handy for travellers who don't always have access to the internet. In this post we are going to provide you links to download the PDF version, source code and online version of every monthly MSDN Magazine issue starting from 2010. Bookmark this post and keep checking it monthly to get access to MSDN Magazine links.

    December 2010 Issue: Download PDF (not yet available) | Download Source Code | Read Magazine Online
    November 2010 Issue: Download PDF (not yet available) | Read Magazine Online | Download Source Code
    October 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    September 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    August 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    July 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    June 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    May 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    April 2010 Issue: Download PDF | Read Magazine Online | Download Source Code
    March 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    February 2010 Issue: Download PDF | Download Source Code | Read Magazine Online
    January 2010 Issue: Download PDF | Download Source Code | Read Magazine Online

    This article titled, Download Microsoft MSDN Magazine PDF Issues For Offline Reading, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • How to get IIS7 to release a locked file?

    - by Jarrod Dixon
    During our production builds, a very large (10 megabyte) static content file in the root directory will sometimes be locked by IIS and cannot be deleted by the clean task. This is presumably because it is being actively served to one or more clients at the time. The build process stops the website before cleaning via c:\Windows\System32\inetsrv\appcmd.exe stop site http://oursite.com However, this does not release the file - we have to restart IIS to get the process to relinquish its lock. appcmd.exe allows you to take IIS down completely; we do not want to do this! Are there any other ways to get IIS to let go of a locked file, without restarting IIS? Simply stopping and starting the individual website is definitely not working to release the file lock.

    Read the article

  • mysql jdbc got ArrayIndexOutOfBoundsException when database name length = 9

    - by Thang Hoang
    The code below (MySQL 5.1, JDBC driver 5.1.21) throws:

        Exception in thread "main" java.sql.SQLException: Unable to connect to any hosts due to exception: java.lang.ArrayIndexOutOfBoundsException: 40

    If I change the connection string to any database whose name length is != 9, it passes and prints 'Connected'. If I create another database named '123456789', it throws the same exception. When I connect to another database on Amazon that has the same name length, it throws java.lang.ArrayIndexOutOfBoundsException: 43; that database version is 'mysql Ver 14.14 Distrib 5.5.28, for debian-linux-gnu (i686) using readline 6.2'. Any idea about this weird MySQL behavior? Thanks.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class MysqlConnection {
            public static void main(String[] args) throws Exception {
                Connection conn = null;
                String userName = "root";
                String password = "123456";
                String url = "jdbc:mysql://localhost:3306/test12345";

                Class.forName("com.mysql.jdbc.Driver").newInstance();
                conn = DriverManager.getConnection(url, userName, password);
                System.out.println("Connected");
            }
        }

    Read the article

  • SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

    - by Pinal Dave
    Today is my birthday. Personal Note When I was young, I was always looking forward to my birthday as on this day, I used to get gifts from everybody. Now when I am getting old on each of my birthday, I have almost same feeling but the direction is different. Now on each of my birthday, I feel like giving gifts to everybody. I have received lots of support, love and respect from everybody; and now I must return it back.Well, on this birthday, I have very unique gifts for everybody – my latest course on SQL Server. How I Tune Performance I often get questions where I am asked how do I work on a normal day. I am often asked that how do I work when I have performance tuning project is assigned to me. Lots of people have expressed their desire that they want me to explain and demonstrate my own method of solving performance problem when I am facing real world problem. It is a pretty difficult task as in the real world, nothing goes as planned and usually planned demonstrations have no place there. The real world, demands real solutions and in a timely fashion. If a consultant goes to industry and does not demonstrate his/her capabilities in very first few minutes, it does not matter how much fame he/she is, the door is shown to them eventually. It is true and in my early career, I have faced it quite commonly. I have learned the trick to be honest from the start and request absolutely transparent communication from the organization where I am to consult. Play by Play Play by Play is a very unique setup. It is not planned and it is a step by step course. It is like a reality show – a very real encounter to the problem and real problem solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server Admin does in the real world. This Play-by-Play focuses on SQL Server performance tuning and I go over optimizing queries and fine-tuning the server. The table of content of this course is very simple. Introduction In the introduction I explained my basic strategies when I am approached by a customer for performance tuning. Basic Information Gathering In this module I explain how I do gather various information for performance tuning project. It is very crucial to demonstrate to customers for consultant his capability of solving problem. I attempt to resolve a small problem which gives a big positive impact on performance, consultant have to gather proper information from the start. I demonstrate in this module, how one can collect all the important performance tuning metrics. Removing Performance Bottleneck In this module, I build upon the previous module’s statistics collected. I analysis various performance tuning measures and immediately start implementing various tweaks on the performance, which will start improving the performance of my server. This is a very effective method and it gives immediate return of efforts. Index Optimization Indexes are considered as a silver bullet for performance tuning. However, it is not true always there are plenty of examples where indexes even performs worst after implemented. The key is to understand a few of the basic properties of the index and implement the right things at the right time. In this module, I describe in detail how to do index optimizations and what are right and wrong with Index. If you are a DBA or developer, and if your application is running slow – this is must attend module for you. I have some really interesting stories to tell as well. 
Optimize Query with Rewrite Every problem has more than one solution, in this module we will see another very famous, but hard to master skills for performance tuning – Query Rewrite. There are few do’s and don’ts for any query rewrites. I take a very simple example and demonstrate how query rewrite can improve the performance of the query at many folds. I also share some real world funny stories in this module. This course is hosted at Pluralsight. You will need a valid login for Pluralsight to watch  Play by Play: Pinal Dave course. You can also sign up for FREE Trial of Pluralsight to watch this course. As today is my birthday – I will give 10 people (randomly) who will express their desire to learn this course, a free code. Please leave your comment and I will send you free code to watch this course for free. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle's data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to the Oracle Data Integrator's integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems including Oracle Database, HP NonStop and Microsoft SQL Server into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI's integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James' next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle's data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes, and these processes have hardly ever failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan's photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK's 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that it also supports operational decision making requirements from a wide range of Oracle based ERP applications such as E-Business Suite, PeopleSoft, Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: The point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons,on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers&hellip;

    - by Mark Rackley
    Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs these days, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area, however we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected] Below are our official job postings. Thanks for stopping by, we look forward to  hearing from you. Senior SharePoint .NET Developer Senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework. Senior developer with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. For selected development path, prepare technical specification and build the solution. Assist project manager with defining development task schedule and level-of-effort. Lead technical solution deployment. Job Requirements Minimum of 7 years experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4). Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business. Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records. Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources). Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services. Experience building custom web part solutions. Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.Net workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations. Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions). Must have impeccable communication skills, both written and verbal. Seeking a "tinkerer"; proactive with a thirst for knowledge (and a sense of humor). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. Applicants must pass a skills assessment test. MCP/MCTS or comparable certification preferred. 
Salary & Travel Negotiable SharePoint Project Lead Define project task schedule, work breakdown structure and level-of-effort. Serve as principal liaison to the customer to manage deliverables and expectations. Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports. Prepare technical briefings and presentation decks, provide briefs to C-level stakeholders. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to "final" as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of "developers", but is also expected to engage in development directly as needed.  Job Requirements Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint2007/2010) and/or Project Server. Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs. General "development" skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top-level of skinning to the core of the SharePoint object model. Impeccable communication skills, both written and verbal, and a sense of humor are required. The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. PMP certification, PgMP preferred. Salary & Travel Negotiable

    Read the article

  • Metro: Promises

    - by Stephen.Walther
    The goal of this blog entry is to describe the Promise class in the WinJS library. You can use promises whenever you need to perform an asynchronous operation such as retrieving data from a remote website or a file from the file system. Promises are used extensively in the WinJS library. Asynchronous Programming Some code executes immediately, some code requires time to complete or might never complete at all. For example, retrieving the value of a local variable is an immediate operation. Retrieving data from a remote website takes longer or might not complete at all. When an operation might take a long time to complete, you should write your code so that it executes asynchronously. Instead of waiting for an operation to complete, you should start the operation and then do something else until you receive a signal that the operation is complete. An analogy. Some telephone customer service lines require you to wait on hold – listening to really bad music – until a customer service representative is available. This is synchronous programming and very wasteful of your time. Some newer customer service lines enable you to enter your telephone number so the customer service representative can call you back when a customer representative becomes available. This approach is much less wasteful of your time because you can do useful things while waiting for the callback. There are several patterns that you can use to write code which executes asynchronously. The most popular pattern in JavaScript is the callback pattern. When you call a function which might take a long time to return a result, you pass a callback function to the function. For example, the following code (which uses jQuery) includes a function named getFlickrPhotos which returns photos from the Flickr website which match a set of tags (such as “dog” and “funny”): function getFlickrPhotos(tags, callback) { $.getJSON( "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?", { tags: tags, tagmode: "all", format: "json" }, function (data) { if (callback) { callback(data.items); } } ); } getFlickrPhotos("funny, dogs", function(data) { $.each(data, function(index, item) { console.log(item); }); }); The getFlickr() function includes a callback parameter. When you call the getFlickr() function, you pass a function to the callback parameter which gets executed when the getFlicker() function finishes retrieving the list of photos from the Flickr web service. In the code above, the callback function simply iterates through the results and writes each result to the console. Using callbacks is a natural way to perform asynchronous programming with JavaScript. Instead of waiting for an operation to complete, sitting there and listening to really bad music, you can get a callback when the operation is complete. Using Promises The CommonJS website defines a promise like this (http://wiki.commonjs.org/wiki/Promises): “Promises provide a well-defined interface for interacting with an object that represents the result of an action that is performed asynchronously, and may or may not be finished at any given point in time. By utilizing a standard interface, different components can return promises for asynchronous actions and consumers can utilize the promises in a predictable manner.” A promise provides a standard pattern for specifying callbacks. In the WinJS library, when you create a promise, you can specify three callbacks: a complete callback, a failure callback, and a progress callback. 
Promises are used extensively in the WinJS library. The methods in the animation library, the control library, and the binding library all use promises. For example, the xhr() method included in the WinJS base library returns a promise. The xhr() method wraps calls to the standard XmlHttpRequest object in a promise. The following code illustrates how you can use the xhr() method to perform an Ajax request which retrieves a file named Photos.txt: var options = { url: "/data/photos.txt" }; WinJS.xhr(options).then( function (xmlHttpRequest) { console.log("success"); var data = JSON.parse(xmlHttpRequest.responseText); console.log(data); }, function(xmlHttpRequest) { console.log("fail"); }, function(xmlHttpRequest) { console.log("progress"); } ) The WinJS.xhr() method returns a promise. The Promise class includes a then() method which accepts three callback functions: a complete callback, an error callback, and a progress callback: Promise.then(completeCallback, errorCallback, progressCallback) In the code above, three anonymous functions are passed to the then() method. The three callbacks simply write a message to the JavaScript Console. The complete callback also dumps all of the data retrieved from the photos.txt file. Creating Promises You can create your own promises by creating a new instance of the Promise class. The constructor for the Promise class requires a function which accepts three parameters: a complete, error, and progress function parameter. For example, the code below illustrates how you can create a method named wait10Seconds() which returns a promise. The progress function is called every second and the complete function is not called until 10 seconds have passed: (function () { "use strict"; var app = WinJS.Application; function wait10Seconds() { return new WinJS.Promise(function (complete, error, progress) { var seconds = 0; var intervalId = window.setInterval(function () { seconds++; progress(seconds); if (seconds > 9) { window.clearInterval(intervalId); complete(); } }, 1000); }); } app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { wait10Seconds().then( function () { console.log("complete") }, function () { console.log("error") }, function (seconds) { console.log("progress:" + seconds) } ); } } app.start(); })(); All of the work happens in the constructor function for the promise. The window.setInterval() method is used to execute code every second. Every second, the progress() callback method is called. If more than 10 seconds have passed then the complete() callback method is called and the clearInterval() method is called. When you execute the code above, you can see the output in the Visual Studio JavaScript Console. Creating a Timeout Promise In the previous section, we created a custom Promise which uses the window.setInterval() method to complete the promise after 10 seconds. We really did not need to create a custom promise because the Promise class already includes a static method for returning promises which complete after a certain interval. The code below illustrates how you can use the timeout() method. The timeout() method returns a promise which completes after a certain number of milliseconds. WinJS.Promise.timeout(3000).then( function(){console.log("complete")}, function(){console.log("error")}, function(){console.log("progress")} ); In the code above, the Promise completes after 3 seconds (3000 milliseconds). 
The Promise returned by the timeout() method does not support progress events. Therefore, the only message written to the console is the message “complete” after 10 seconds. Canceling Promises Some promises, but not all, support cancellation. When you cancel a promise, the promise’s error callback is executed. For example, the following code uses the WinJS.xhr() method to perform an Ajax request. However, immediately after the Ajax request is made, the request is cancelled. // Specify Ajax request options var options = { url: "/data/photos.txt" }; // Make the Ajax request var request = WinJS.xhr(options).then( function (xmlHttpRequest) { console.log("success"); }, function (xmlHttpRequest) { console.log("fail"); }, function (xmlHttpRequest) { console.log("progress"); } ); // Cancel the Ajax request request.cancel(); When you run the code above, the message “fail” is written to the Visual Studio JavaScript Console. Composing Promises You can build promises out of other promises. In other words, you can compose promises. There are two static methods of the Promise class which you can use to compose promises: the join() method and the any() method. When you join promises, a promise is complete when all of the joined promises are complete. When you use the any() method, a promise is complete when any of the promises complete. The following code illustrates how to use the join() method. A new promise is created out of two timeout promises. The new promise does not complete until both of the timeout promises complete: WinJS.Promise.join([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)]) .then(function () { console.log("complete"); }); The message “complete” will not be written to the JavaScript Console until both promises passed to the join() method completes. The message won’t be written for 5 seconds (5,000 milliseconds). The any() method completes when any promise passed to the any() method completes: WinJS.Promise.any([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)]) .then(function () { console.log("complete"); }); The code above writes the message “complete” to the JavaScript Console after 1 second (1,000 milliseconds). The message is written to the JavaScript console immediately after the first promise completes and before the second promise completes. Summary The goal of this blog entry was to describe WinJS promises. First, we discussed how promises enable you to easily write code which performs asynchronous actions. You learned how to use a promise when performing an Ajax request. Next, we discussed how you can create your own promises. You learned how to create a new promise by creating a constructor function with complete, error, and progress parameters. Finally, you learned about several advanced methods of promises. You learned how to use the timeout() method to create promises which complete after an interval of time. You also learned how to cancel promises and compose promises from other promises.

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, and limited in function at first, growing in capability and complexity over time. There were few if any "standards", let alone "open standards" and the concepts of "open systems", and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles, struggling through costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast paced, highly charged, and competitive global economy that we have today, where the most successful companies are companies that are good at implementing, leveraging and exploiting change. Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform supporting business through evolutionary change, by employing "open": open standards; open systems; open architectures; and open solutions. Employing "open", enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business, allowing companies to focus on providing "unique business value" rather than "unique IT". The big question! Does moving from "older" technology that no longer meets the needs of today's business, to new "open" technology require yet another "revolutionary change"? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that may work well today, but in 5 years time will become yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "1 size fits all" IT solution for business. 
    If all companies implemented business solutions based upon technology that required, or forced, the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or, to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future?

    Intelligent and informed decisions - "buying right". In a previous blog entry, we discussed the value of linking tactical to strategic. The key is driving the focus to what is best for your business: handling today's tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that vendors supplying those options focus on what is best for your business, supply sufficient information, provide adequate answers to questions, address challenges, issues, concerns and objections honestly and openly, and supply solutions that are tailored for, and deliver the most business value possible to, your business. Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions:

    - Solve the problem(s) you are trying to address?
    - Provide a solid foundation upon which to grow/enhance your business?
    - Provide tactical gains that align with and enable your strategic business goals/objectives?
    - Provide an infrastructure that can be leveraged with subsequent projects?
    - Solve problems for the business overall and the lines of business, or just for IT?
    - Simplify your current environment?
    - Provide the basis for business efficiency, agility and clarity: governance, risk and compliance, real-time business visibility and trend analysis?

    And does your IT staff have the knowledge/experience to successfully manage the proposed systems once they are deployed in production?

    Done well, you will be presented with options tailored to your business that enable you to deliver the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage: delivering what your customers need, when they need it, so you can attract new customers and new business and grow top-line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost, providing significantly improved profit margin and shareholder value.

    Read the article

  • Java-JDBC-MySQL Error

    - by LeonardPeris
    I'm trying to get my Java program to talk to a MySQL DB, so I did some reading and downloaded MySQL Connector/J. I've extracted it into my home directory ~. Here are the contents:

        user@hamster:~$ ls
        LoadDriver.class  LoadDriver.java  mysql-connector-java-5.1.18-bin.jar

    The contents of LoadDriver.java are:

        user@hamster:~$ cat LoadDriver.java
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        // Notice, do not import com.mysql.jdbc.*
        // or you will have problems!

        public class LoadDriver {
            public static void main(String[] args) {
                try {
                    // The newInstance() call is a work around for some
                    // broken Java implementations
                    Class.forName("com.mysql.jdbc.Driver").newInstance();
                } catch (Exception ex) {
                    System.out.println(ex);
                }
            }
        }

    The contents are the same as http://dev.mysql.com/doc/refman/5.1/en/connector-j-usagenotes-basic.html#connector-j-usagenotes-connect-drivermanager with the only change that the exception is printed to the console in the catch block. I compile it as follows:

        leonard@hamster:~$ javac LoadDriver.java

    When I try to execute it, the following is the output:

        leonard@hamster:~$ java LoadDriver
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

    That output is consistent with the command as executed, but when I try to run it with the prescribed CLASSPATH method I run into the following issue:

        leonard@hamster:~$ java -cp /home/leonard/mysql-connector-java-5.1.18-bin.jar LoadDriver
        Exception in thread "main" java.lang.NoClassDefFoundError: LoadDriver
        Caused by: java.lang.ClassNotFoundException: LoadDriver
                at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
                at java.security.AccessController.doPrivileged(Native Method)
                at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
                at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
        Could not find the main class: LoadDriver. Program will exit.

    Am I missing something? How do I get MySQL's own code samples running?
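
    A likely culprit, offered as a hedged guess: once -cp is passed explicitly, the current directory is no longer on the classpath, so the JVM can see the driver jar but can no longer find LoadDriver.class itself. Adding "." alongside the jar (the jar path below is copied from the question) usually clears the NoClassDefFoundError:

        # "." supplies LoadDriver.class, the jar supplies com.mysql.jdbc.Driver
        java -cp .:/home/leonard/mysql-connector-java-5.1.18-bin.jar LoadDriver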

    Read the article

  • Trying to install apache 2.4.10 with openssl 1.0.1i

    - by AlexMA
    I need to install Apache 2.4.10 using OpenSSL 1.0.1i. I compiled OpenSSL from source with:

        $ ./config \
            --prefix=/opt/openssl-1.0.1e \
            --openssldir=/opt/openssl-1.0.1e
        $ make
        $ sudo make install

    and Apache with:

        ./configure --prefix=/etc/apache2 \
            --enable-access_compat=shared \
            --enable-actions=shared \
            --enable-alias=shared \
            --enable-allowmethods=shared \
            --enable-auth_basic=shared \
            --enable-authn_core=shared \
            --enable-authn_file=shared \
            --enable-authz_core=shared \
            --enable-authz_groupfile=shared \
            --enable-authz_host=shared \
            --enable-authz_user=shared \
            --enable-autoindex=shared \
            --enable-dir=shared \
            --enable-env=shared \
            --enable-headers=shared \
            --enable-include=shared \
            --enable-log_config=shared \
            --enable-mime=shared \
            --enable-negotiation=shared \
            --enable-proxy=shared \
            --enable-proxy_http=shared \
            --enable-rewrite=shared \
            --enable-setenvif=shared \
            --enable-ssl=shared \
            --enable-unixd=shared \
            --enable-ssl \
            --with-ssl=/opt/openssl-1.0.1i \
            --enable-ssl-staticlib-deps \
            --enable-mods-static=ssl
        make

    (I would run sudo make install next, but I get an error.) I'm essentially following the guide here, except with slightly newer versions. My problem is that I get a linker error when I run make for Apache:

        ...
        Making all in support
        make[1]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
        make[2]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
        /usr/share/apr-1.0/build/libtool --silent --mode=link x86_64-linux-gnu-gcc -std=gnu99 -pthread -L/opt/openssl-1.0.1i/lib -lssl -lcrypto \
                -o ab ab.lo /usr/lib/x86_64-linux-gnu/libaprutil-1.la /usr/lib/x86_64-linux-gnu/libapr-1.la -lm
        /usr/bin/ld: /opt/openssl-1.0.1i/lib/libcrypto.a(dso_dlfcn.o): undefined reference to symbol 'dlclose@@GLIBC_2.2.5'

    I tried the answer here, but no luck. I would prefer to just use aptitude, but unfortunately the versions I need aren't available yet. If anyone knows how to fix the linker problem (or what I think is a linker problem), or knows of a better way to tell Apache to use a newer OpenSSL, it would be greatly appreciated; I've got Apache working otherwise.
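
    A hedged sketch of one common fix: an unresolved dlclose symbol in libcrypto.a(dso_dlfcn.o) usually means libdl has to be linked explicitly when Apache links against OpenSSL's static libcrypto. Exporting LIBS before re-running configure is one way to pull it in; the prefix and --with-ssl path below mirror the question, and the long list of --enable-*=shared module flags is left out for brevity:

        export LIBS="-ldl"
        ./configure --prefix=/etc/apache2 \
            --enable-ssl \
            --with-ssl=/opt/openssl-1.0.1i \
            --enable-ssl-staticlib-deps \
            --enable-mods-static=ssl
            # plus the other --enable-*=shared flags from the original invocation
        make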

    Read the article

  • apache redirect based on referrer

    - by user40259
    I've just noticed that schoolmate.r09.railsrumble.com seems to be doing a rewrite to one of my domains, agilegrad.com. This is kind of annoying, since that site is the top hit on Google when searching for my domain, and the address bar stays as schoolmate.r09.railsrumble.com when following the link. I'd like to redirect incoming requests that have schoolmate.r09.railsrumble.com in the referrer variable to agilegrad.com, but the following in my vhost config for agilegrad.com isn't working for me:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} railsrumble
        RewriteRule ^/(.*) http://www.agilegrad.com/$1 [L,R=301]

    With this on, I see a bunch of redirects in my access log, but only for static files (css, javascript). This is the first time I've tried using mod_rewrite, so I'm not even sure I can accomplish what I want, which is to do a full redirect to my domain so that the user ends up at agilegrad.com.
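
    One hedged possibility: if schoolmate.r09.railsrumble.com is a DNS alias or proxy that points at the same server (rather than a page merely linking to it), then the Host header identifies those requests, not the Referer, so a condition on %{HTTP_HOST} may be the better fit. A sketch for the agilegrad.com vhost:

        RewriteEngine On
        # catch requests addressed to the railsrumble hostname and send the browser to the canonical domain
        RewriteCond %{HTTP_HOST} railsrumble [NC]
        RewriteRule ^/?(.*)$ http://www.agilegrad.com/$1 [L,R=301]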

    Read the article

  • Wireless and Wired Network Access at same time?

    - by grasper
    At work I use a laptop, a Dell Latitude D630 with Windows XP. I work in a lab environment where I need to use the Ethernet port with a static IP to interact with a local network (which cannot talk to the outside world). What I would like to do is use the wireless connection for internet access, so I can check email, etc. at the same time that I am using the Ethernet network. It seems like this is not possible. Is there a piece of software, or a way to configure Windows, that would allow me to do this?
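
    It generally is possible on XP as long as only one adapter owns the default gateway. A minimal sketch, under the assumption that further lab equipment sits on a 10.10.0.0/16 range behind a lab router at 192.168.10.1 (both values are placeholders, not from the question): leave the default gateway blank in the wired adapter's static configuration, so internet traffic falls through to the wireless adapter, and add persistent routes only for the lab ranges.

        rem addresses below are placeholders; run in a command prompt on the XP laptop
        route -p add 10.10.0.0 mask 255.255.0.0 192.168.10.1
        route print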

    Read the article

  • IPcop Multiple WAN Subnets

    - by obsidian
    We have an IPcop firewall and have had no issues with it. We've had a block of 10 IP addresses from our colocation provider and have been able to port forward from those to internal servers as needed. We recently needed additional IPs, and the colocation provider issued an additional block of 10. The problem: the 10 new IP addresses are in a different subnet with a different gateway. The question: how do I add the new gateway into IPcop, and how do I make it so that any outbound traffic sent in response to inbound traffic on a new IP goes back out through the new gateway? I attempted to add a static route via the console using the following command:

        route add -net x.x.x.x gw x.x.x.x netmask 255.255.255.192

    I also added the new IPs as aliases and set up port forwarding as I've done with the existing IP block. However, when I attempt to access a web server from an external workstation, it just times out. Thanks in advance for your assistance.
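
    If the iproute2 tools are present on the IPcop box (an assumption, as are every address and the eth1 RED interface below), source-based policy routing is the usual way to make replies for the second block leave via the second gateway:

        # placeholders: new block 203.0.113.64/26, new gateway 203.0.113.65, RED interface eth1
        echo "200 wan2" >> /etc/iproute2/rt_tables
        ip route add default via 203.0.113.65 dev eth1 table wan2
        ip rule add from 203.0.113.64/26 table wan2
        ip route flush cache

    IPcop's web interface does not appear to expose anything like this, so rules added by hand would need to be re-applied after a reboot (for example from a local startup script).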

    Read the article

  • Network issues with DNS not being found

    - by Anriëtte Combrink
    Hi there. This is exactly how our network looks: a single server with a network router. Everything is set up, but I cannot connect our Macs to this server under Login Options - Join... Our server's name is Toolbox, and I have tried Toolbox.local and Toolbox.private, and prepended the afp:// protocol to the name, but nothing works; our Macs just don't want to connect this way. Our router runs DHCP and hands out all the IP addresses; would I have to add Toolbox.local to the DNS on the router and link it to a static internal IP for the server? Our Macs keep giving the following error while trying to join the Network Account Server:

        Unable to add server
        Could not resolve the address (2200)

    What am I doing wrong?
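
    A couple of hedged checks before touching the router: confirm that the name the clients are given actually resolves, and that the server agrees with its own DNS record. The commands below assume this is Mac OS X Server with Open Directory; "toolbox.local" is taken from the question.

        # on a client Mac: can the server's name be resolved at all?
        dscacheutil -q host -a name toolbox.local

        # on the server itself: does its hostname match what DNS returns?
        sudo changeip -checkhostname

    If resolution fails, adding an A record for the server's static IP in the router's DNS (or running DNS on the server itself) is the usual next step, rather than relying on DHCP alone.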

    Read the article

< Previous Page | 377 378 379 380 381 382 383 384 385 386 387 388  | Next Page >