Search Results

Search found 26059 results on 1043 pages for 'spring test'.

  • Check_webinject plugin will not connect to https site using

    - by uSlackr
    We're using Nagios to monitor some of our web sites. We have a script that uses an older plugin, which we are trying to replace with webinject.pl from CPAN. When the script runs, it generates this error:

        LWP::Protocol::https::Socket: SSL connect attempt failed with unknown error error:1407741A:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert decode error at /usr/local/share/perl5/LWP/Protocol/http.pm line 51.

    It appears the web site does not support TLSv1 for https. If it matters, the site is a Cisco WebVPN. I've pointed the same script at a different site that does support TLSv1 and it works fine. My web search is coming up empty.

    Successful connect:

        <case id="1" description1="Metro Home Page" description2="Metro, login test" method="get" url="https://metro.myco.com/index.php" verifypositive="restricted" logrequest="yes" logresponse="yes" sleep="1" />

    Failing connect:

        <case id="2" description1="WebVPN Home Page" description2="webvpn.myco.com login test" method="get" url="https://webvpn.myco.com/webvpn.html" verifypositive="Authorized" logrequest="yes" logresponse="yes" sleep="1" />
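
    To confirm that the server really is choking on the TLSv1 handshake, you can reproduce the request outside of webinject by pinning LWP to a specific SSL version. A minimal sketch, assuming LWP 6.x with IO::Socket::SSL underneath (the ssl_opts hash is passed through to IO::Socket::SSL):

        use strict;
        use warnings;
        use LWP::UserAgent;

        # Force SSLv3 instead of the TLSv1 handshake the WebVPN rejects.
        my $ua = LWP::UserAgent->new(
            ssl_opts => {
                SSL_version     => 'SSLv3',
                verify_hostname => 0,   # skip cert checks for this test only
            },
        );

        my $resp = $ua->get('https://webvpn.myco.com/webvpn.html');
        print $resp->status_line, "\n";

    If this succeeds where the TLSv1 attempt fails, the fix is to make webinject's underlying LWP connection use the same SSL_version setting.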

    Read the article

  • Tab Sweep - More OSGi, Coherence, Oracle Java moves, JMS 2.0 and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:

    • Why I will use Java EE (JEE, and not J2EE) instead of Spring in new Enterprise Java Projects (Kai)
    • What is Happening vs. What is Interesting (Geertjan)
    • Oracle Coherence & Oracle Service Bus: REST API Integration (Nino)
    • Oracle's Top 10 Java Moves of 2011 (eWeek)
    • JEP 122: Remove the Permanent Generation (OpenJDK.org)
    • JEE6 – Glassfish 3.1, Clustering & Failover (Xebia.fr)
    • Testing the LAZY mechanism in EJB 3 (e-blog-java)
    • Discoing with Vorpal (Chuk)
    • Devoxx: the evolutions of JMS 2.0 (Ippon.fr)
    • More OSGi... (Jarda)
    • Practical Migration to Java 7 - Small Code Examples (FOSSLC)
    • Coherence Part III: Filters (Zenika.com)

    Read the article

  • Circular dependency and object creation when attempting DDD

    - by Matthew
    I have a domain where an Organization has People.

    Organization entity:

        public class Organization
        {
            private readonly List<Person> _people = new List<Person>();

            public Person CreatePerson(string name)
            {
                var person = new Person(this, name);
                _people.Add(person);
                return person;
            }

            public IEnumerable<Person> People
            {
                get { return _people; }
            }
        }

    Person entity:

        public class Person
        {
            public Person(Organization organization, string name)
            {
                if (organization == null)
                {
                    throw new ArgumentNullException("organization");
                }
                Organization = organization;
                Name = name;
            }

            public Organization Organization { get; private set; }
            public string Name { get; private set; }
        }

    The rule for this relationship is that a Person must belong to exactly one Organization. The invariants I want to guarantee are:

    • A person must have an organization: this is enforced via the Person's constructor.
    • An organization must know of its people: this is why the Organization has a CreatePerson method.
    • A person must belong to only one organization: this is why the organization's people list is not publicly mutable (ignoring the possibility of casting back to List; maybe AsEnumerable can enforce that, though I'm not too concerned about it).

    What I want out of this is that if a person is created, the organization knows about its creation. However, the problem with the model currently is that you can create a person without ever adding it to the organization's collection. Here's a failing unit test that describes my problem:

        [Test]
        public void AnOrganizationMustKnowOfItsPeople()
        {
            var organization = new Organization();
            var person = new Person(organization, "Steve McQueen");
            CollectionAssert.Contains(organization.People, person);
        }

    What is the most idiomatic way to enforce the invariants and the circular relationship?
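
    One common way to square this circle (a sketch, not necessarily the canonical DDD answer) is to treat Organization as the aggregate root and restrict Person construction so the factory method is the only creation path. Making the constructor internal, for example, means code outside the domain assembly can no longer bypass CreatePerson, and the failing test's direct new Person(...) call simply stops compiling:

        public class Person
        {
            // internal: only the domain assembly (in practice, only
            // Organization.CreatePerson) can construct a Person, so every
            // Person is guaranteed to be registered with its Organization.
            internal Person(Organization organization, string name)
            {
                if (organization == null)
                {
                    throw new ArgumentNullException("organization");
                }
                Organization = organization;
                Name = name;
            }

            public Organization Organization { get; private set; }
            public string Name { get; private set; }
        }

    Within the domain assembly the invariant still rests on convention, but consumers of the model can only obtain a Person via organization.CreatePerson(name).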

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters
    The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities
    As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or automatic failover to the most up-to-date slave (that is the default, but it is user-configurable if needed) in the event of a master failure. The new utilities, along with links to related Engineering blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput
    The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:

    • Subquery optimizations: subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    • CURRENT_TIMESTAMP as the default for DATETIME columns: for simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    • Optimizations for range-based queries: the Optimizer now uses ready statistics instead of index-based scans for queries with multiple range values.
    • Optimizations for queries using filesort and ORDER BY: the decision on execution method is now made at the optimization stage rather than the parsing stage.
    • EXPLAIN output in JSON format, for hierarchical readability and Enterprise tool consumption.

    You can learn the details about these new features, as well as all of the Optimizer-based improvements in MySQL 5.6, by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs
    As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs. Today is no exception, as we are also releasing the following to Labs for you to download, try, and let us know your thoughts on where we need to improve:

    InnoDB Online Operations
    MySQL 5.6 now provides online ADD INDEX, FK DROP and online column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the gory details by following John Russell's blog.

    InnoDB data access via Memcached API ("NotOnlySQL") - improved refresh of an earlier feature release
    Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance and Scale
    The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 Labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:

    • 2x throughput improvement for read-only activity
    • 6x throughput improvement for SELECT range queries
    • Read/write benchmarks are in progress

    More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.

    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access Labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
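
    As a quick illustration of the new JSON EXPLAIN output mentioned above (the table and columns here are hypothetical):

        EXPLAIN FORMAT=JSON
        SELECT customer_id, SUM(total)
        FROM orders
        WHERE order_date BETWEEN '2012-01-01' AND '2012-03-31'
        GROUP BY customer_id;

    Instead of the classic tabular plan, the server returns a nested JSON document describing each step of the plan, which is easier for tools to parse and for humans to read hierarchically.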

    Read the article

  • Instant database snapshot

    - by raj
    My product uses an Oracle 9 database in its backend. Every week a new release of the product is launched, which needs to fire some DML and DDL queries at the database. I usually test each product release in a dummy database before applying it to the main database. I create a database dump using the exp command, then import it into the dummy database using imp. Then I test the product in the dummy database and check for errors. This exp/imp cycle takes about 3 hours to complete. Is there any alternative, such as an instant snapshot of the live database (which would be independent of the live one)? Or is there any option to keep the dummy database in sync with the original database at all times? That could be done by making the product fire its DML and DDL queries at both databases, but that would be a HUGE performance problem. How can I overcome this?
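
    For what it's worth, Oracle's RMAN can clone a database considerably faster than an exp/imp round trip, since it copies datafiles rather than rebuilding rows. A minimal sketch, assuming an auxiliary instance named dummydb has already been prepared (init.ora, password file and directory structure in place; names are illustrative):

        rman TARGET sys/password@livedb AUXILIARY sys/password@dummydb

        RMAN> DUPLICATE TARGET DATABASE TO dummydb;

    The duplicate is a fully independent copy, so the live database is untouched; the trade-off is the disk space for a second full set of datafiles.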

    Read the article

  • How to directly send *.php.html files to browser without passing through PHP? (Apache)

    - by Cédric Girard
    Hi, on my Apache/PHP server, the file test.php.html is parsed by PHP, while test.html is not. PHPDOC creates a lot of *.php.html files with an XML header which is a pain for the PHP parser. How can I tell Apache not to pass *.php.html files to PHP and just send the file back to the browser? My php.conf file:

        <IfModule prefork.c>
            LoadModule php5_module modules/libphp5.so
        </IfModule>
        <IfModule worker.c>
            LoadModule php5_module modules/libphp5-zts.so
        </IfModule>

        AddHandler php5-script .php
        AddType text/html .php

    What can I do? Thanks, best regards, Cédric
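
    The root cause is that AddHandler applies to a name containing the extension anywhere, because Apache evaluates every extension in a multi-extension name like test.php.html. A sketch of one common fix, binding PHP only to names that end in .php:

        # Replace the AddHandler line: match only a trailing .php extension,
        # so test.php.html falls through to the default text/html handling.
        <FilesMatch "\.php$">
            SetHandler php5-script
        </FilesMatch>

    After restarting Apache, *.php.html files should be served verbatim while plain *.php files are still executed.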

    Read the article

  • maildir in Windows for IMAP

    - by User1
    I'm interested in accessing my IMAP accounts offline. I found that maildirs are a simple way to make this work, and that offlineimap takes care of almost everything in making the IMAP-maildir sync happen. Then I can open the account in Mutt or the Wanderlust client. One major problem: maildirs use colons in their filenames, and Windows doesn't allow colons. I tried

        mount -f -s -b -o managed "d:/tmp/mail" "/home/of/mail"

    in Cygwin, but doing an

        echo test > /home/of/mail/test:file

    didn't work. I'm thinking about ext2fs, but I need an ext2 partition somewhere. Can I make a file into a partition somehow? I don't want to start modifying my hard drive's partition table. Besides, does anyone know if ext2fs will support colons in filenames?

    Read the article

  • Ubuntu 12.04 server and tftp access violation issue on put command

    - by SMYERS
    I installed tftp as per this document: http://icesquare.com/wordpress/solvedtftp-error-code-2-access-violation/ I followed it to the letter three times, and every time I put a file I get:

        root@CiscoCFG:~# tftp localhost
        tftp> put test
        Error code 2: Access violation

    If I touch the file name and chmod 777 the file, then a put works perfectly fine. My config is as follows:

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /svr/tftp
            disable     = no
        }

    The permissions on the directory /svr/tftp are 777:

        drwxrwxrwx 3 nobody nobody 4096 Nov 14 10:32 svr

    This thing should have full permissions, as would anyone who wanted to write to or read from that directory. I see nothing in the logs; I'm really stumped on this. If the file already exists in the directory I can read it all day long. I just can't put NEW files: gets work fine, but I can only put to an existing file with permissions 777. Thanks
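
    If this is HPA's in.tftpd, the behavior described (uploads succeed only onto pre-existing, world-writable files) is its documented default: the server refuses to create new files unless started with the -c option. A sketch of the changed line, assuming the rest of the xinetd config stays as above:

        server_args = -c -s /svr/tftp

    With -c, uploaded files that don't already exist are created (owned by the user the daemon runs as, nobody here), so the touch/chmod 777 workaround is no longer needed.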

    Read the article

  • Apache w/out internet connection

    - by robert knobulous
    I have a Vista laptop that I have been running Apache / MySQL / PHP / phpMyAdmin on for quite some time without fail. I just use it to test bits of code here and there. No problems, until recently when I needed to test something and happened to be in a place where I could not get an internet connection. Why am I unable to access localhost from the same machine without an internet connection? I type http://localhost into the browser's address bar and I get a message that I am unable to access it without an internet connection. I checked my windows/system32/drivers/etc/hosts file and the first two lines are:

        127.0.0.1 localhost
        ::1       localhost

    What am I missing here?

    Read the article

  • Correct For Loop Design

    - by Yttrill
    What is the correct design for a for loop? Felix currently uses

        if len a > 0 do
            for var i in 0 upto len a - 1 do
                println a.[i];
            done
        done

    which is inclusive of the upper bound. This is necessary to support the full range of values of a typical integer type. However, the for loop shown does not support zero-length arrays, hence the special test; nor will the subtraction of 1 work convincingly if the length of the array is equal to the number of integers. (I say convincingly because it may be that 0 - 1 = maxval: this is true in C for unsigned int, but are you sure it is true for unsigned char without thinking carefully about integral promotions?) The actual implementation of the for loop by my compiler does correctly handle 0, but this requires two tests to implement the loop:

        continue:
            if not (i <= bound) goto break
            body
            if i == bound goto break
            ++i
            goto continue
        break:

    Throw in the hand-coded zero check in the array example and three tests are needed. If the loop were exclusive it would handle zero properly, avoiding the special test, but there'd be no way to express the upper bound of an array with maximum size. Note that the C way of doing this:

        for (i = 0; predicate(i); increment(i))

    has the same problem: the predicate is tested after the increment, but the terminating increment is not universally valid! There is a general argument that a simple exclusive loop is enough: promote the index to a large type to prevent overflow, and assume no one will ever loop to the maximum value of this type... but I'm not entirely convinced: if you promoted to C's size_t and looped from the second largest value to the largest, you'd get an infinite loop!
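
    For comparison, here is a sketch in C of an inclusive-bound loop that is correct both for the empty range and for a bound equal to the type's maximum value. It uses the same two tests as the compiler-generated loop above: an emptiness guard, plus an equality test placed before the increment, so the increment that would overflow is never executed:

        #include <limits.h>
        #include <stdio.h>

        int main(void) {
            unsigned lo = 0, hi = 10;   /* works even when hi == UINT_MAX */
            if (lo <= hi) {             /* guard: skip empty ranges */
                unsigned i = lo;
                for (;;) {
                    printf("%u\n", i);  /* loop body */
                    if (i == hi) break; /* test BEFORE increment: no overflow */
                    ++i;
                }
            }
            return 0;
        }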

    Read the article

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location URL for a new resource:

        public HttpResponseMessage Post(User model)
        {
            var response = Request.CreateResponse(HttpStatusCode.Created, model);
            var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" });
            response.Headers.Location = new Uri(link);
            return response;
        }

    That code uses a previously defined route, "DefaultApi", which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don't initialize the HttpConfiguration and Request instances associated with the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration and a few properties you need to add to the HttpRequestMessage. The following code illustrates what's needed:

        var controller = new UserController();
        controller.Configuration = new HttpConfiguration();

        var route = controller.Configuration.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        var routeData = new HttpRouteData(route,
            new HttpRouteValueDictionary
            {
                { "id", "1" },
                { "controller", "Users" }
            }
        );

        controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
        controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
        controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

    The HttpRouteData instance should be initialized with the route values you will use in the controller method ("id" and "controller" in this example). Once you have correctly set up all those properties, you shouldn't have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!

    Read the article

  • Am I wrong to disagree with A Gentle Introduction to symfony's template best practices?

    - by AndrewKS
    I am currently learning symfony, going through the book A Gentle Introduction to symfony, and came across this section in "Chapter 4: The Basics of Page Creation" on creating templates (or views):

    "If you need to execute some PHP code in the template, you should avoid using the usual PHP syntax, as shown in Listing 4-4. Instead, write your templates using the PHP alternative syntax, as shown in Listing 4-5, to keep the code understandable for non-PHP programmers."

    Listing 4-4 - The Usual PHP Syntax, Good for Actions, But Bad for Templates

        <p>Hello, world!</p>
        <?php
        if ($test) {
            echo "<p>".time()."</p>";
        }
        ?>

    (The ironic thing about this is that the echo statement would look even better if time were a variable declared in the controller, because then you could just embed the variable in the string instead of concatenating.)

    Listing 4-5 - The Alternative PHP Syntax, Good for Templates

        <p>Hello, world!</p>
        <?php if ($test): ?>
        <p><?php echo time(); ?></p>
        <?php endif; ?>

    I fail to see how Listing 4-5 makes the code "understandable for non-PHP programmers", and its readability is shaky at best; 4-4 looks much more readable to me. Are there any programmers using symfony who write their templates like those in 4-4 rather than 4-5? Are there reasons I should use one over the other? There is the very slim chance that somewhere down the road someone less technical could be editing the template, but how does 4-5 actually make it more understandable to them?

    Read the article

  • Partner Webcast – Weblogic for Developers - 12 July 2012

    - by Thanos
    Oracle WebLogic Server is the industry's leading application server for deploying Java EE applications, with support for new features for lowering the cost of operations, improving performance and enhancing scalability. But it is also a great choice for Java developers, because of differentiating capabilities that facilitate integration with other tools and frameworks and that promote reusability and rapid redeployment of your applications. During the webinar we're going to explore these differentiating features in more detail.

    Agenda:

    • Java EE standards support in different WebLogic Server versions
    • WebLogic classloading: how WebLogic loads classes, the filtering classloader, shared libraries, and the Classloader Analysis Tool
    • Spring support in WebLogic
    • WebLogic integration with Apache Maven
    • Advanced deployment features: FastSwap and side-by-side deployment
    • Q&A session

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at [email protected]. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • How do I set a default host for nginx?

    - by ulf
    I'm trying to figure out how to set a default host for my nginx installation. I found this article in the nginx wiki: http://wiki.nginx.org/NginxVirtualHostExample#A_Default_Catchall_Virtual_Host Unfortunately, this doesn't work. After restarting I get this:

        Restarting nginx: nginx: [emerg] unknown directive "http" in /etc/nginx/sites-enabled/catchall:1
        nginx: configuration file /etc/nginx/nginx.conf test failed

    After removing the http directive I get this:

        Restarting nginx: nginx: [emerg] unknown log format "main" in /etc/nginx/sites-enabled/catchall:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    I'm on Ubuntu 10.04.3, where I'm using the official nginx PPA; nginx 1.0.9 is running.
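
    Both errors have the same cause: files in sites-enabled are included inside the main http {} block of nginx.conf, so the wiki example's outer http {} wrapper (and its reference to the log format named "main", which that example defines elsewhere) must not be copied in. A minimal catch-all sketch that should work as a sites-enabled file on nginx 1.0.9:

        # Catch-all: any request whose Host header matches no other server
        # block lands here. default_server requires nginx >= 0.8.21.
        server {
            listen 80 default_server;
            server_name _;
            access_log /var/log/nginx/catchall.access.log;
            root /var/www/default;   # hypothetical docroot; adjust to taste
        }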

    Read the article

  • Computer POSTs and draws the BIOS screens very slowly - Motherboard issue?

    - by Ssvarc
    I have a desktop that refuses to boot into Windows. I ran Hitachi DFT and the HD came back OK. I then ran Memtest86+, and it took hours for the test to run; after 8+ hours it was only up to test number 6. I aborted and ran Memtest86, which ran at basically the same speed. I aborted that as well and went to look at the BIOS settings. The computer is slow at POST: it takes a long time for the keyboard to be recognized, etc., and the BIOS settings screens take a noticeably long time to be drawn. What could be causing such behavior? EDIT: I gave the computer back a while ago without ever discovering the cause, so I'm closing the question.

    Read the article

  • Benchmark an SSD

    - by Taylor Huston
    What is the best way to test/benchmark an SSD to make sure it's doing its job? I invested in an SSD and have my OS on it, and I want to make sure I am getting my money's worth. I have heard people making claims along the lines of: "I've had my SSD for X months and my read/write speeds have dropped Y%." What is the best way to test for things like that (and what are good numbers to look for)? For reference, I have a Samsung 830 128 GB.

    Read the article

  • Which Browser is the Best to Use When Running Your Laptop on Battery Power?

    - by Asian Angel
    Squeezing the maximum amount of usage time out of your laptop battery can be challenging at times; it all depends on the software you are using. One piece of software we are all likely to be using is a browser, to keep up with our online lives. If your laptop is older, then getting the most out of its aging battery is definitely a must. The good folks over at the 7 Tutorials blog have done a comparison test to see which browser is the gentlest on your laptop's battery, and the results may surprise you. You can view the results by visiting the link below. Had better (or worse) luck with one of the browsers tested? Then make sure to share the results with your fellow readers in the comments!

    Test Comparison: Which Browser Will Make Your Laptop's Battery Last Longer? [7 Tutorials]

    Read the article

  • Why did I fail to build rtorrent?

    - by hugemeow
    I am not root, so I have to build rtorrent from source, hoping to install it in my home directory, but the build fails. Why?

        [mirror@hugemeow rtorrent]$ ls
        AUTHORS  autogen.sh  ChangeLog  configure.ac  COPYING  doc  INSTALL
        Makefile.am  NEWS  rak  README  scripts  src  test

        [mirror@hugemeow rtorrent]$ ./autogen.sh
        aclocal...
        aclocal:configure.ac:7: warning: macro `AM_PATH_CPPUNIT' not found in library
        autoheader...
        libtoolize... using libtoolize
        automake...
        configure.ac: installing `./install-sh'
        configure.ac: installing `./missing'
        src/Makefile.am: installing `./depcomp'
        autoconf...
        configure.ac:7: error: possibly undefined macro: AM_PATH_CPPUNIT
              If this token and others are legitimate, please use m4_pattern_allow.
              See the Autoconf documentation.

    Though autogen.sh failed, a configure script is created:

        [mirror@hugemeow rtorrent]$ ls
        aclocal.m4  AUTHORS  autogen.sh  autom4te.cache  ChangeLog  config.guess
        config.h.in  config.sub  configure  configure.ac  COPYING  depcomp  doc
        INSTALL  install-sh  ltmain.sh  Makefile.am  Makefile.in  missing  NEWS
        rak  README  scripts  src  test

    Running configure then fails with a syntax error near the unexpected token `1.9.6'. What's wrong, and what should I do to build rtorrent on my CentOS box?

        [mirror@hugemeow rtorrent]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        ./configure: line 2016: syntax error near unexpected token `1.9.6'
        ./configure: line 2016: `AM_PATH_CPPUNIT(1.9.6)'

        [mirror@hugemeow rtorrent]$ git branch
        * master
        [mirror@hugemeow rtorrent]$ git branch -a
        * master
          remotes/origin/HEAD -> origin/master
          remotes/origin/c++11
          remotes/origin/master
        [mirror@hugemeow rtorrent]$ git tag
        0.9.0
        0.9.1
        [mirror@hugemeow rtorrent]$ git clean -dfx
        Removing Makefile.in
        Removing aclocal.m4
        Removing autom4te.cache/
        Removing config.guess
        Removing config.h.in
        Removing config.log
        Removing config.sub
        Removing configure
        Removing depcomp
        Removing doc/Makefile.in
        Removing install-sh
        Removing ltmain.sh
        Removing missing
        Removing src/Makefile.in
        Removing src/core/Makefile.in
        Removing src/display/Makefile.in
        Removing src/input/Makefile.in
        Removing src/rpc/Makefile.in
        Removing src/ui/Makefile.in
        Removing src/utils/Makefile.in
        Removing test/Makefile.in

    Edit 1: details about libtoolize and libtool:

        [mirror@hugemeow rtorrent]$ libtoolize --version
        libtoolize (GNU libtool) 1.5.22
        Copyright (C) 2005 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

        [mirror@hugemeow rtorrent]$ libtool --version
        ltmain.sh (GNU libtool) 1.5.22 (1.1220.2.365 2005/12/18 22:14:06)
        Copyright (C) 2005 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
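
    The failure means aclocal cannot find the macro AM_PATH_CPPUNIT, which ships in cppunit's cppunit.m4; when the macro is left unexpanded, the literal text AM_PATH_CPPUNIT(1.9.6) ends up in configure and produces the shell syntax error. A sketch of a non-root fix, assuming a cppunit source tarball is available (version and paths are illustrative):

        # build cppunit into your home directory
        cd ~/src/cppunit-1.12.1
        ./configure --prefix=$HOME/local && make && make install

        # regenerate rtorrent's build scripts with cppunit's m4 dir visible
        cd ~/src/rtorrent
        export PATH=$HOME/local/bin:$PATH     # so cppunit-config is found
        aclocal -I $HOME/local/share/aclocal  # picks up cppunit.m4
        autoconf && autoheader && automake --add-missing
        ./configure --prefix=$HOME/local
        make && make install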

    Read the article

  • User for MSSQL 2008 Service Accounts

    - by Campo
    I want to create a domain user that runs the SQL Server service accounts. The reason for this is that I have set up mirroring, and Microsoft recommends having the same domain user account running the services across all the computers in the configuration to ensure mirroring will work properly. Right now, in the test environment, I just had them run under my own user for simplicity. But now that I know what I am doing, I would like to test the configuration more accurately. I am also aware that it makes things much simpler if this user is an administrator. My question is: should I just create a simple user, SQLSERVICEUSER, and make it an administrator? That seems a little insecure to me. Does anyone have a more elegant solution?

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but have tested few of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

    • performance
    • ease of use (allow streaming or formatting or something like that)
    • reliability (doesn't leak or crash!)
    • cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu)

    Which logging library would you recommend? Currently, I think boost::log is the most flexible one (you can even log remotely!), but I have no performance numbers for it (update: it aims for high performance, but isn't released yet). Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't manage multithreading, so that's a big problem, even though it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.

    Read the article

  • Tab Sweep: Logging, WebSocket, NoSQL, Vaadin, RESTful, Task Scheduling, Environment Entries, ...

    - by arungupta
    Recent Tips and News on Java, Java EE 6, GlassFish & more:

    • Detailed Logging Output with GlassFish Server, Hibernate, and Log4j (wikis.oracle.com)
    • Serving Static Content on WebLogic and GlassFish (Colm Divilly)
    • Java EE and communication between applications (Martin Crosnier)
    • What are the new features in Java EE 6? (jguru)
    • Standardizing JPA for NoSQL: are we there yet? (Emmanuel)
    • Create an Asynchronous JAX-WS Web Service and call it from Oracle BPEL 11g (Bob)
    • Programmatic Login to Vaadin application with JAAS utilizing JavaEE6 features and Spring injection (vaadin)
    • Is an EntityManager injected in an EJB thread-safe? (Adam Bien)
    • Websocket using Glassfish (demj33)
    • Designing and Testing RESTful web services [UML, REST CLIENT] (Mamadou Lamine Ba)
    • Glassfish hosting - Revion.com Glassfish Oracle hosting (revion.com)
    • Task Scheduling in Java EE 6 on GlassFish using the Timer Service (Micha Kops)
    • JEE 6 Environmental Enterprise Entries and Glassfish (Slim Ouertani)
    • Top 10 Causes of Java EE Enterprise Performance Problems (Pierre-Hugues Charbonneau)

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck typing rather than static type checking, but many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, writing test cases, validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state: that it still "looks like a duck" and "quacks like a duck". In statically typed languages you can simply declare "int x", and any time you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time the object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words; not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraints/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!)

    Contrived example:

        rectangle = {'length': 5, 'width': 10}

        # We live in a fictional universe where multiplication is super expensive.
        # Therefore any time we multiply, we need to cache the results.
        def area(rect):
            if 'area' in rect:
                return rect['area']
            rect['area'] = rect['length'] * rect['width']
            return rect['area']

        print area(rectangle)
        rectangle['length'] = 15
        print area(rectangle)  # compare expected vs. actual output!

        # imagine the same thing with object attributes rather than dictionary keys
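
    For illustration, here is a minimal sketch of the kind of declarative duck-check the question is after; quacks_like is a hypothetical helper, not an existing library:

        def quacks_like(obj, **constraints):
            """Check that obj has each named attribute and that each
            attribute's value satisfies the supplied predicate."""
            for name, predicate in constraints.items():
                if not hasattr(obj, name):
                    raise TypeError("missing attribute: %s" % name)
                if not predicate(getattr(obj, name)):
                    raise ValueError("bad value for attribute: %s" % name)

        class Rect(object):
            def __init__(self, length, width):
                self.length, self.width = length, width

        r = Rect(5, 10)
        quacks_like(r, length=lambda v: v >= 0, width=lambda v: v >= 0)  # passes
        r.length = -1
        quacks_like(r, length=lambda v: v >= 0)  # raises ValueError

    Hooking such a check into attribute mutation, so it runs on every assignment, would be a matter of properties or a __setattr__ override, which is roughly what validation libraries in this space do.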

    Read the article

  • Symlink - Permission Denied

    - by John Smith
    I'm facing an interesting problem, with plenty of "Permission denied" outputs when using symlinks. Linux: Slackware 13.1.

    Directory with the symlink:

        root@Tower:/var/lib# ls -lah
        drwxr-xr-x  8 root root  0 2012-12-02 20:09 ./
        drwxr-xr-x 15 root root  0 2012-12-01 21:06 ../
        lrwxrwxrwx  1 ntop ntop 21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/

    Symlinked directory:

        root@Tower:/mnt/user/media# ls -lah
        drwxrwx--- 1 nobody users 1.4K 2012-12-02 19:28 ./
        drwxrwx--- 1 nobody users  128 2012-11-18 16:06 ../
        drwxrwxrwx 1 ntop  ntop   320 2012-12-02 20:22 ntop6/

    What I have done: I used chown -h ntop:ntop on the ntop symlink in /var/lib, and just to be sure, chmod 777 on both directories. Permission denied actions:

        root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test
        mkdir: cannot create directory `/var/lib/ntop/test': Permission denied

    Any ideas?
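
    A detail worth noting from the listings above: symlink permissions themselves are never consulted (the lrwxrwxrwx mode is irrelevant); what matters is that the ntop user can traverse every directory on the target path. Here /mnt/user/media is drwxrwx--- nobody:users, so unless ntop is a member of the users group, traversal stops there regardless of the 777 mode on ntop6 itself. A quick way to check each path component (namei ships with util-linux):

        namei -l /mnt/user/media/ntop6

    This prints the owner, group and mode of every path element, so the blocking directory stands out.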

    Read the article

  • Make Network Manager use bridge for PPPoE instead of only working on ethernet?

    - by Azendale
    My ISP uses PPPoE on its DSL connections. I use NetworkManager to connect through a bridged modem attached to eth0. Often I want to test networking things, so I set up a KVM machine with a tap interface. I can then connect these interfaces to virtual 'switches' by adding them to bridges (I work for my ISP). Sometimes I want to test cases where the PPPoE is connected more than once. For this, I would like to be able to add eth0 to my 'switch' (a bridge) so the VMs can have a 'bridged modem' connection to the internet, while still running the PPPoE session for my own computer at the same time. That means I need to get NetworkManager to run PPPoE over the bridge (or over eth0 while it is enslaved). The problem is that NetworkManager considers eth0 (and the bridge) 'not managed', so it refuses to use them. So, how can I have NetworkManager dial PPPoE over a bridge?
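
    One thing worth checking (a sketch, assuming the Ubuntu/Debian packaging of NetworkManager with its ifupdown plugin): interfaces declared in /etc/network/interfaces are flagged 'unmanaged' unless the plugin is told otherwise. Setting

        # /etc/NetworkManager/NetworkManager.conf
        [ifupdown]
        managed=true

    and restarting network-manager removes the 'not managed' flag, after which the DSL (PPPoE) connection can be pointed at the interface. Whether NetworkManager will offer to dial over a bridge interface itself is version-dependent, so older releases may still refuse the bridge even once it is managed.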

    Read the article

  • My New Job

    - by Stuart Brierley
    Last year I started a new job with a logistics company in the North of England, where I was responsible for the management, design and development of IT integration strategies, architectures and solutions using BizTalk Server 2009. This included the design and implementation of the BizTalk Server 2009 infrastructure, the definition of development standards, mentoring a fellow developer in the ways of BizTalk, and migrating a number of existing solutions from Softshare over to BizTalk 2009. Unfortunately I then realised that, following this initial set-up, there didn't actually seem to be that much BizTalk work for me to get stuck into, and reluctantly I have now moved on to a very similar role with the country's largest office supplies company. Based in Sheffield, we distribute office supplies on a UK-wide basis and computer supplies across Europe. The situation here is slightly different from when I first joined my previous employer. Whereas that was a green-field installation with no previous BizTalk solutions in place, my new employer currently has a number of live BizTalk 2000 (!) and BizTalk 2006 solutions in place. Unfortunately the infrastructure around these is less than ideal, with no clear distinction between development and test environments and no source control whatsoever! We are currently building a proposal for a new BizTalk Server 2010 implementation, where I am hopeful of being able to implement fully independent development, test and pseudo-live environments alongside an enterprise-level live installation. We should also be introducing Team Foundation Server to the development process, thereby giving us some much-needed source control capabilities. Following this is likely to be a period of migration for the existing BizTalk solutions, along with the onward development of new projects and initiatives - I'm hoping to be a busy man for the foreseeable future :o)

    Read the article
