Search Results

Search found 9464 results on 379 pages for 'cron task'.


  • Debugging .NET code called from X++ code in AX 2012

    - by ssmantha
    A very intriguing issue came my way: debugging .NET code called from X++ code in AX 2012. This was indeed a challenge to nail down, but luckily the tools and a few concepts helped me achieve the task. Here it goes... We need seamless debugging back and forth between the AX debugger and Visual Studio. To enable this, we first check whether the DLL to be debugged is present in the GAC; if it is, we may need to uninstall it from there because of the order of preference in which .NET loads assemblies. Assemblies are loaded from the GAC first, and only then does the runtime probe for public and private assemblies. Since an assembly in the GAC is always compiled with runtime optimizations, it is difficult to debug, so we need to unhook it from the GAC and then rely on the normal .NET assembly loading pattern. Step 1: Remove the target assembly from the GAC. Before that, stop all the AOS servers and close every program that relies on the AOT, e.g. all clients and even Visual Studio. Step 2: Build your sample code that is present in the AOT in debug mode and get the DLL along with its PDB files. Step 3: Place these files in the Server\..\Bin and Client\bin directories of the AX installation. Step 4: Configure Visual Studio. Step 4.1: Configure the debugging options: in Visual Studio go to Debug -> Options and Settings -> Debugging node -> General sub node and disable “Enable Just My Code (managed)”. Step 4.2: Specify the symbol loading directories: add the locations of the Client bin and Server bin directories of the installation, and remember to pick the Server bin directory corresponding to your AOS instance. Step 4.3: Configure the project for debugging. Step 5: Ready to go: place your breakpoints in X++ and in .NET wherever necessary before starting. Run the Visual Studio project and it will invoke the AX client with your breakpoint hitting the X++ code; when you step in with F11 the Visual Studio debugger becomes active, and from there onwards you can debug the complete flow. Seamless debugging across debuggers is a really good and mostly underutilized feature; using it improves troubleshooting and saves a huge amount of time. Stay tuned for more in Advanced Debugging...
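
    A small aside that is not part of the original post: if stepping from the AX debugger into Visual Studio still refuses to cooperate, one fallback is to let the managed entry point offer to attach a debugger itself. The class and method below are made up for illustration; only the System.Diagnostics calls are standard .NET.

        // Hypothetical .NET class called from X++; the Debugger calls are a debugging aid only.
        using System.Diagnostics;

        public static class AxBridge
        {
            public static string Process(string input)
            {
                if (!Debugger.IsAttached)
                {
                    Debugger.Launch();   // prompts to attach a debugger (e.g. Visual Studio)
                }
                Debugger.Break();        // execution stops here once a debugger is attached
                return input.ToUpperInvariant();
            }
        }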

    Read the article

  • Installing Ubuntu 12.04 on NUC Intel i3 DC3217IYE

    - by Kieron
    System: NUC i3, 2 HDMI ports, Ethernet, no wireless; UEFI boot; 2x2 GB RAM; 30 GB mSATA internal drive; Ubuntu 64-bit 12.04.3. Hello, I am having much trouble loading an OS on my NUC. I started out attempting another OS with various loaders (beast/hack) without much success (various panics on boot, endless reboot loops, or graphics failures). After many tries I decided to attempt Ubuntu. Many years ago I loaded Ubuntu on an e-machine without an issue, so I figured it would go smoothly; nothing could be further from the truth. The live USB stick loads, but when I am installing the OS it always fails to install GRUB 2. Obviously it won't boot from the SSD without GRUB 2. I searched and found that Boot-Repair should fix it, so I created a live USB with Boot-Repair. It refuses to repair GRUB because the install never finished and the needed partitions are not fully created and flagged. It also demands an internet connection, which I am unable to provide. I then ran GParted in an attempt to create the needed partitions manually, then re-ran Boot-Repair with the "check internet" option turned off and the GRUB re-install disabled, hoping it would create the missing directories and/or fix the flags. It appeared to run successfully, but upon returning to the Ubuntu live USB it still fails at the GRUB 2 install. Also, GParted doesn't offer the same choices that the Ubuntu installer has when creating partitions, so it did not recognize that I already had a root or an EFI partition, and sometimes it couldn't tell what the format of a partition was; all very annoying. The reason I can't connect to the internet (and can't upload the error logs) is that the NUC only has Ethernet and the location where I have to set it up is too far from the modem; I cannot move the monitor closer to the modem as it is a 50-inch LCD. I just want to do a basic install with one user account and remote desktop (VNC) turned on, so I can move the NUC to the modem, connect via Ethernet, and then finish setting it up via remote desktop/VNC chicken from my Mac. While I await any assistance you may be able to provide, I am going to attempt to switch to the 32-bit version and legacy boot to see if that can load GRUB. Thanks again to anyone who can come up with a possible solution. I would love to hit "erase and install Ubuntu" if anyone can figure out what is stopping that simple answer from working. Also, discs (CD/DVD) are not an option as neither my Mac mini nor my NUC has an optical drive, and I have no desire to buy one for one task.

    Read the article

  • Mexico leading in Business Transformation Strategies:

    - by [email protected]
    By John Burke Group Vice President Oracle Applications Business Unit     I recently completed a business tour in Mexico, and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met.  An example of the economic vibrancy of the country: across the street from my hotel was the local Bentley dealership, Coach Store, Yves Saint Laurent and of course a Starbucks.  I only made it to Starbucks.  Both the Coach Store and YSL had a line of folks waiting to get in... As for thought leadership, there were several illustrations only on the first day. I had the opportunity to meet with a branch of the Mexican Federal Government. Their questions were not about clerical task automation, far from it! We discussed citizen on-line access to fees and services - for example looking up the duty on an international goods shipment, or tracking that my taxes have been received, or the status of my request for a certain service.  Eligibility, policies and status.  Having an integrated rules or policy automation system that would allow businesses and citizens to access accurate information and ensure the proper collection of fees and payment for 3rd party provided services.    Then in the afternoon, I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement).  This CEO started discussing how he wanted to transform his business from a cement products company to a service company and market 5-10-15 year service contracts which would guarantee the structural integrity of the roof and of course that the roof would remain waterproof.  Although his products were guaranteed, they required an annual inspection and most home owners never schedule that inspection until it is too late and water damage has occurred.  These emergency calls reduce his margin and reduce customer satisfaction.  This lead to a discussion of business models in general and why long term differentiation can only come from service, not just for the music or news industries, but also for roofing companies!    I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.    

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Objects functionality (it is also known as Task Optimization and also known as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views. For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some parts of the screen when you wanted to register or maintain an individual, and other parts of the screen when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object covering the aspects of the Person object that pertain to an individual, and a Company business object covering the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to whatever the implementation is familiar with. The business object can also restructure the underlying object. For example, a common identifier for an individual in the USA is the Social Security number; this value is stored as a Person Identifier (as identifiers vary by country). In the new Human object you can remap that Person Identifier as a Social Security number. To define a business object you use a schema editor built into the browser user interface and a mapping language to set up the business objects. An example of the language is shown below in an extract of the schema for the Human business object. As you can see, there are mapping tags as well as formatting and other tags. This information can be built manually or using a wizard which generates the base structure for you to alter. It is all stored as metadata when saved. Once a business object is built it can be used as the basis for code, for other business objects (we support inheritance), called by a screen (called a UI Map) or even exposed as a Web Service. This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.
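
    Purely as a conceptual illustration (my sketch; the framework defines this declaratively with schema metadata, not with hand-written code), the remapping idea is roughly analogous to exposing a narrower, renamed view over a wider base object:

        // Illustration only: hypothetical classes showing a "business object" as a view.
        public class Person                                // the wide base object
        {
            public string Name { get; set; }
            public string PersonIdentifier { get; set; }   // generic, country-specific identifier
            public string CompanyRegistration { get; set; }
        }

        public class Human                                 // a view exposing only the individual-related parts
        {
            private readonly Person basePerson;
            public Human(Person person) { basePerson = person; }

            public string Name => basePerson.Name;

            // remapped tag name: the generic identifier surfaces as a Social Security number
            public string SocialSecurityNumber
            {
                get => basePerson.PersonIdentifier;
                set => basePerson.PersonIdentifier = value;
            }
        }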

    Read the article

  • SQL SERVER – Use ROLL UP Clause instead of COMPUTE BY

    - by pinaldave
    Note: This upgrade test was performed on a development server using bits of SQL Server 2012 RC0 (which was publicly available when the test was performed). However, SQL Server RTM (GA on April 1) is expected to behave similarly. I recently observed an upgrade from SQL Server 2005 to SQL Server 2012 with the compatibility level kept at SQL Server 2012 (110). After upgrading the system and testing the various modules of the application, we quickly noticed that a few of the reports were not working; they were throwing errors. When I looked carefully I noticed that the code was using the COMPUTE BY clause, which is deprecated in SQL Server 2012. COMPUTE BY is replaced by the ROLLUP clause in SQL Server 2012. However, there is no direct drop-in replacement: users have to rewrite quite a few things when using ROLLUP instead of COMPUTE BY. The primary reason is how each of them returns results: the original COMPUTE BY code produced lots of result sets, whereas ROLLUP returns everything in a single result set. Here is an example of similar code using ROLLUP and COMPUTE BY. I personally find ROLLUP much easier than COMPUTE BY as it returns all the results in a single result set, unlike the other one. Here is the quick code which I wrote to demonstrate the said behavior.
        CREATE TABLE tblPopulation (
            Country VARCHAR(100),
            [State] VARCHAR(100),
            City VARCHAR(100),
            [Population (in Millions)] INT )
        GO
        INSERT INTO tblPopulation VALUES('India', 'Delhi','East Delhi',9 )
        INSERT INTO tblPopulation VALUES('India', 'Delhi','South Delhi',8 )
        INSERT INTO tblPopulation VALUES('India', 'Delhi','North Delhi',5.5)
        INSERT INTO tblPopulation VALUES('India', 'Delhi','West Delhi',7.5)
        INSERT INTO tblPopulation VALUES('India', 'Karnataka','Bangalore',9.5)
        INSERT INTO tblPopulation VALUES('India', 'Karnataka','Belur',2.5)
        INSERT INTO tblPopulation VALUES('India', 'Karnataka','Manipal',1.5)
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Mumbai',30)
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Pune',20)
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nagpur',11 )
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nashik',6.5)
        GO
        SELECT Country,[State],City,
            SUM ([Population (in Millions)]) AS [Population (in Millions)]
        FROM tblPopulation
        GROUP BY Country,[State],City WITH ROLLUP
        GO
        SELECT Country,[State],City, [Population (in Millions)]
        FROM tblPopulation
        ORDER BY Country,[State],City
        COMPUTE SUM([Population (in Millions)]) BY Country,[State]--,City
        GO
    After writing this blog post I continue to feel that there should be some better way to do the same task. Is there any easier way to replace COMPUTE BY? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What changes were made to a document

    - by Daniel Moth
    Part of my job is writing functional specs. Due to the inevitable iterative and incremental nature of software design/development, these specs need to be updated with additions/deletions/changes over a period of time. When the time comes for a developer to implement features or update their design document (or a tester to test the feature or update their test specs) they need to be doing that against the latest spec. The problem is that if they have reviewed this document already, they need a quick way to find the delta from the last time they reviewed it to see what changes exist and how their existing plans may be affected (instead of having to read the entire document again). Doing that is very easy assuming your Word documents are hosted on SharePoint. 1. Every time you review a document note the SharePoint version and/or date (if it is a printed copy, make sure your printout includes the date in the footer – all my specs do) 2. When you need to see what changed, open the document (make sure you are not using a cached or local offline copy) and on the ribbon go to the "Review" tab and then  click on the "Compare" button. 3. Click on the "Specific Version…" option. In the dialog that pops up pick the last version you reviewed and click the "Compare" button. [TIP for authors: before checkin of your document, always compare against the "Last Version" on the SharePoint so you can add appropriate more complete check in comments] 4. What you see now is that in addition to the document you have open, two other documents just opened up. One is in the background (flashing on your task bar) – close that one as it is the old version. 5. The other document is in the foreground and contains all the changes between the old version and the latest one. Be sure not to make edits to this document, use it only for reading the changes. To find all the changes, on the ribbon under the "Review" tab, click on the "Reviewing Pane" to open the reviewing pane on the left. You can now click on each pink change to see what it is. 6. When you are done reviewing changes close the document and don't save any changes (remember if you want to make edits/additions/comments make them in the original document which is still open). And now I have a URL to point to people that keep asking about this – enjoy  :-) Comments about this post welcome at the original blog.

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table either as a nullable column or with a default value. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:
        USE master
        GO
        --Drop Database if exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
        DROP DATABASE AddColumn
        --Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        --Drop the table if exists
        IF EXISTS ( SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
        DROP TABLE ExistingTable
        GO
        --Create the table
        CREATE TABLE ExistingTable
        (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
        DateTime1 DATETIME DEFAULT GETDATE(),
        DateTime2 DATETIME DEFAULT GETDATE(),
        DateTime3 DATETIME DEFAULT GETDATE(),
        DateTime4 DATETIME DEFAULT GETDATE(),
        Gendar CHAR(1) DEFAULT 'M',
        STATUS1 CHAR(1) DEFAULT 'Y' )
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000
    Before adding a column
    Before adding a column, let us look at some of the details of the database.
        DBCC IND (AddColumn,ExistingTable,1)
    By running the above query, you will see 637 pages for the created table.
    Adding a column
    You can add a column to the table with the following statement.
        ALTER TABLE ExistingTable Add NewColumn INT NULL
    The above will add a column with a NULL value for the existing records. Alternatively you could add a column with a default value.
        ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1
    The above statement will add a column with the value 1 for the existing records. In the table below I measured the performance difference between the two statements.
        Parameter    Nullable Column    Default Value
        CPU                31                 702
        Duration          129 ms             6653 ms
        Reads              38             116,397
        Writes              6               1,329
        Row Count           0             100,000
    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason it takes more duration and CPU to add a column with a default value. We can verify this in several ways.
    Number of pages
    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it "at your own risk".
        DBCC IND (AddColumn,ExistingTable,1)
        Before adding the columns              637
        Adding a column with NULL              637
        Adding a column with DEFAULT value    1270
    This clearly shows that pages are physically modified. Please note that the high value shown for "Adding a column with DEFAULT value" is also partly a result of page splits. Continues…

    Read the article

  • Rights and use of developed software

    - by Nils Munch
    I have been working on a piece of software for a company, that they wish to resell. There was an mail-based agreement upon a flat hourly rate for my work, and eager me chose to accept a rather low fee. Due to the stress and tempo of the task, a direct contract was never formed or signed. The software was developed locally on my machine, and I was pretty much alone with it, except by excellent help from StackOverflow when I got stuck. Now, the software is nearing completion, I suddenly hear that they have hired a new developer to make the same piece of software as me, and that I was expected to resign within long. Confused I ask around, and realize that the CEO of the company had informed the rest of the company that I was terminally ill and had cancer, and was expected to leave the company soon. Since I'm perfectly healthy, this confused me even more, until I realized what was going on. When I confronted my boss with this, I was no longer seen as a member of the company, and I left the same day, never to return. Later, I raised the question about my missing pay, since I had been working for quite a bit, and not received any payment for my software. I saw that they had already sold a fair copy of my software, and since it's not exactly sold cheap, the company should have plenty of gold to pay me. The company refused, and said that they owned the software, and everything it contained. That was a lot of drama, but my question is this: Who has the rights to the software ? The source code had my personal watermarks and copyrights inprinted, but they have since simply deleted it. The company claim that they have all the rights, because they have a website made about the product, where they write that they have "All rights reserved" in the bottom. My instinct tells me that if a company buys a service like this, and then refuses to pay their developer, then they should not be allowed to keep, and much less resell the product. I have not signed any agreements about giving the company the use of this product, I have made it in my own time and without help from the rest of the company. This all takes place in Denmark, Europe, but I would guess that the rules about this is somewhat universal. Im not the strongest person to legal-talk, so I might be wrong.

    Read the article

  • Five things SSIS should drop

    - by jamiet
    There’s a current SQL Server meme going round entitled Five things SQL Server should drop and, whilst no-one tagged me to write anything, I couldn’t resist doing the same for SQL Server Integration Services. So, without further ado, here are five things that I think should be dropped from SSIS.
    Data source connections
    Seriously, does anyone use these? I know why they’re there. Someone sat in a meeting back in the early part of the last decade and said “Ooo, Reporting Services and Analysis Services have these things called Data Sources. If we used them in Integration Services then we’d have a really cool integration story.” Errr….no.
    Web Service Task
    Ditto. If you want to do anything useful against anything but the simplest of SOAP web services, steer well clear of this peculiar SSIS addition.
    ActiveX Script Task
    Another task that I suspect has never seen the light of day in a SSIS package. It was billed as a way of running upgraded DTS2000 ActiveX scripts in SSIS – sounds good except for one thing. Any time one of those scripts tries to talk to the DTS object model (which they all do – otherwise what’s the point) it will error out. This one has always been a real head scratcher.
    Slowly Changing Dimension wizard
    I suspect I may get some push back on this one but I’m mentioning it anyway. Some people like the SCD wizard; I am not one of those people! Everything that the SCD component does can easily be reproduced using other components, and from a performance point of view it is much more beneficial to use those alternatives.
    Multifile Connection Manager
    Imagine buying a house that came with a set of keys that didn’t open any of the doors. Sounds ridiculous, right? How about a SSIS Connection Manager that doesn’t get used by any of the tasks or components? Ah, that’ll be the Multifile Connection Manager then!
    Comments are of course welcome. Diatribes are assumed :) @Jamiet

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say, I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store so that it becomes a simple persistent collection rather than an basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation: class DbBackedList implements List<String> { private DbBackedList() {} /** Returns a list, possibly non-empty */ public static getList() { return new DbBackedList(); } public String get(int index) { return Db.getTable().getRow(i).asString(); // may throw DbExceptions! } // add(String), add(int, String), etc. ... } My problem lies with the fact that the underlying DB API may encounter connection errors that are not specified in the List interface that it should throw. My problem is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on, see below. Footnote: Use Cases In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g if we have a third-party task queue that usually works with a plain List: new TaskWorkQueue(new ArrayList<String>()).start() which is susceptible to losing all it's queue in event of a crash, if we just replace this with: new TaskWorkQueue(new DbBackedList()).start() we get a instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself.
    Can be used interactively - there is some environment where programmers can enter commands one by one
    Requires no more than one file - neither project files nor makefiles are required for running in batch mode
    Can easily split code across multiple files - files can reference each other, or there is some support for modules
    Has good support for data structures - supports structures like arrays, lists, and especially classes
    Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries
    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
        Feature           C#        Python    shell scripting
        ---------------   -------   -------   ---------------
        Interactive       poor      strong    strong
        One file          poor      strong    strong
        Multiple files    strong    strong    moderate
        Data structures   strong    strong    poor
        Features          strong    strong    strong
    Is there a term that captures this idea? If not, what term should I use? Here are some candidates.
    Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
    Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
    Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features
    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-19

    - by Bob Rhubart
    BPM Process Accelerator Packs – Update | Pat Shepherd Architect Pat Shepherd shares several resources relevant to the new Oracle Process Accelerators for Oracle Business Process Management. Oracle BI EE Management Pack Now Available for Oracle Enterprise Manager 12cR2 | Mark Rittman A handy and informative overview from Oracle ACE Director Mark Rittman. WebSockets on WebLogic Server | Steve Button "As there's no standard WebSocket Java API at this point, we've chosen to model the API on the Grizzly WebSocket API with some minor changes where necessary," says James "Buttso" Buttons. "Once the results of JSR-356 (Java API for WebSocket) becomes public, we'll look to implement and support that." Oracle Reference Architecture: Software Engineering This document from the IT Strategies from Oracle library focuses on integrated asset management and the need for efffective asset metadata management to insure that assets are properly tracked and reused in a manner that provides a holistic functional view of the enterprise. The tipping point for cloud management is nigh | Cloud Computing - InfoWorld "Businesses typically don't think too much about managing IT resources until they become too numerous and cumbersome to deal with on an ad hoc basis—a point many companies will soon hit in their adoption of cloud computing." — David Linthicum DevOps Basics: Track Down High CPU Thread with ps, top and the new JDK7 jcmd Tool | Frank Munz "The approach is very generic and works for WebLogic, Glassfish or any other Java application," say Frank Munz. "UNIX commands in the example are run on CentOS, so they will work without changes for Oracle Enterprise Linux or RedHat. Creating the thread dump at the end of the video is done with the jcmd tool from JDK7." Frank has captured the process in the posted video. OIM 11g R2 UI customization | Daniel Gralewski "OIM user interface customizations are easier now, and they survive patch applications--there is no need to reapply them after patching," says Fusion Middleware A-Team member Daniel Gralewski. "Adding new artifacts, new skins, and plugging code directly into the user interface components became an easier task." Daniel shows just how easy in this post. Thought for the Day "I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated." — Poul Anderson (November 25, 1926 – July 31, 2001) Source: SoftwareQuotes.com

    Read the article

  • Play Your Position Until the Play Breaks Down…then Do Whatever it Takes.

    - by AjarnMark
    If I didn’t know better, I would think that K. Brian Kelley (blog | twitter) has been listening in on conversations with my boss. In his recent blog post Successful Teams: Knowing When to Step Out of Your Role, Brian describes quite clearly a philosophy that my boss has been trying to get across to everyone in the department.  We have been using sports analogies, like how important it is to play your position, until the play breaks down (such as a fumble) and then do whatever it takes it to cover each other / recover the ball / win.  While we like having very skilled people who could do a lot of different tasks, it is important that you first do your assigned tasks, and only once those are complete, or failure of the larger mission is probable, do you consider walking away from them to help someone else with their responsibilities. The thing that you cannot afford, especially on a lean team, is the really nice guy who is always trying to help out other people, but in doing so, is never quite getting his own responsibilities taken care of.  Yes, if the Running Back drops the football, you want any member of the team in the vicinity to jump on it, whether that is the leading blocker or the Quarterback.  But until the fumble happens, you want the leading blocker to focus on doing his job, and block for the Running Back.  If the blocker is doing any other job than his primary responsibility, you’re probably going to lose. This sounds logical enough, but it is really easy to go astray with the best of intentions.  This is especially true on a small, tight-knit team, where it is really easy to get sucked into someone else’s task or problem, doubly so if you think you can do it better or faster than them.  Now you are really setting yourself up for failure.  The right thing is to let the other person do the job, even if it seems less efficient in the short-run, so that you can focus on the tasks which require your expertise.  Don’t break formation…don’t abandon your assignment, until it is clear that mission failure is imminent, and even then, as Brian writes, it should be with the agreement of the mission leader. Thanks, Brian, for putting it so well.  This has been distributed throughout our department.

    Read the article

  • Installation issue after 4 attempts

    - by SixTen
    Have successfully installed Ubuntu 12.10 on 2 laptops, one running Vista & one running Windows8. Made 4 attempts (long downloads of WUBI.EXE) to different HD's & still NO GO. The machine is an older machine with Windows2000Professional installed & running. The system has 3 hard drives; C:(20.5 Gb with 7.26Gb Free), D: (74.5 Gb with 33.1 Gb Free), & E: (35.3 Gb with 24.8 Gb Free) which all have Gigabytes space available; also an A: 3 1/2floppy drive and a CD-drive burner. The CPU processor is older but seems sufficient: AMD Athlon XP 1700+ and the task manager of Windows2000 shows the processor works fine.. The flat-screen display works fine. Here is the error message I receive each time the 'installation configuration' is verified: "No root file system is defined" "Please correct this from partitioning menu" << The Ubuntu operating system is allowing me a couple of options at the very top right menu. I was able to establish a wireless connection but the MAIN homepage won't load with FIREFOX app or any other apps. I cannot access or even find any 'Partitioning Menu" from the displayed page. I cannot access files or Windows Explorer to view drives since I'm not using the Windows O/S. If I try to go back & re-install the UBUNTU 12.10 again, it always asks me to UNINSTALL the one found on the HD & then I run WUBI.EXE again which takes a long time for the download. Do I need to go back into Windows2000 & use Windows Explorer to look at the file structure & add a partition? On previous attempts I have tried loading the WUBI.EXE on all 3 HD's C: D: & E: Sure is frustrating?? Thanks for any suggestions. NEW UBUNTU user & what I've seen so far I like.. (J.R.)

    Read the article

  • How to: Show wait cursor in managed and native code

    - by TechTwaddle
    Someone on the MSDN forum asked about how to show a wait cursor, like when your application is loading or performing some (background) task. It’s pretty simple to show the wait cursor in both managed and native code, and in this post we will see just how.
    Managed Code (C#)
    Set Cursor.Current to Cursors.WaitCursor, and call Cursor.Show(). And to come back to the normal cursor, set Cursor.Current to Cursors.Default and call Show() again. Below is a button handler for a sample app that I made (watch the video below).
        private void button1_Click(object sender, EventArgs e)
        {
            lblProgress.Text = "Downloading ether...";
            lblProgress.Update();
            Cursor.Current = Cursors.WaitCursor;
            Cursor.Show();
            //do some processing
            for (int i = 0; i < 50; i++)
            {
                progressBar1.Value = 2 * (i + 1);
                Thread.Sleep(100);
            }
            Cursor.Current = Cursors.Default;
            Cursor.Show();
            lblProgress.Text = "Download complete.";
            lblProgress.Update();
        }
    Native Code
    In native code, call SetCursor(LoadCursor(NULL, IDC_WAIT)); to show the wait cursor, and SetCursor(LoadCursor(NULL, IDC_ARROW)); to come back to normal. The same button handler for the native version of the app is below.
        case IDC_BUTTON_DOWNLOAD:
            {
                HWND temp;
                temp = GetDlgItem(hDlg, IDC_STATIC_PROGRESS);
                SetWindowText(temp, L"Downloading ether...");
                UpdateWindow(temp);
                SetCursor(LoadCursor(NULL, IDC_WAIT));
                temp = GetDlgItem(hDlg, IDC_PROGRESSBAR);
                for (int i=0; i<50; i++)
                {
                    SendMessage(temp, PBM_SETPOS, (i+1)*2, 0);
                    Sleep(100);
                }
                SetCursor(LoadCursor(NULL, IDC_ARROW));
                temp = GetDlgItem(hDlg, IDC_STATIC_PROGRESS);
                SetWindowText(temp, L"Download Complete.");
                UpdateWindow(temp);
            }
            break;
    Here is a video of the sample app running. First the managed version is deployed and then the native version.
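
    One refinement worth mentioning for the managed case (my addition, not part of the original post): the same Cursor.Current calls can be wrapped in a small IDisposable helper so the cursor is restored even if the work in between throws an exception.

        // Minimal sketch of a disposable wait-cursor scope for Windows Forms.
        using System;
        using System.Windows.Forms;

        public sealed class WaitCursorScope : IDisposable
        {
            private readonly Cursor previous;

            public WaitCursorScope()
            {
                previous = Cursor.Current;            // remember whatever cursor was active
                Cursor.Current = Cursors.WaitCursor;  // switch to the wait cursor
            }

            public void Dispose()
            {
                Cursor.Current = previous;            // restored even if an exception was thrown
            }
        }

        // usage inside a button handler:
        // using (new WaitCursorScope())
        // {
        //     DoLongRunningWork();   // hypothetical long-running method
        // }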

    Read the article

  • SQL – What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit

    - by Pinal Dave
    We love puzzles. One of the brain’s main tasks is to solve puzzles. Sometimes puzzles are very complicated (e.g. solving a Rubik’s Cube or Sudoku) and sometimes they are very simple (multiplying 4 by 8, or finding the shortest route while driving). It is always fun to solve a puzzle, and it creates an experience that humans do not forget easily. The best puzzles are the ones where you have to do multiple things to reach the final goal. Let us do something similar today: we will have a contest where you can participate and win something interesting. Contest: This contest has two parts. Question 1: What does ACID stand for in the database? This question seems very easy, but here is the twist: your answer should explain at least one of the ACID properties in detail. If you wish you can explain all four ACID properties, but to qualify you need to explain at least one of them. Question 2: What is the size of the installation file of NuoDB for any specific platform? You can answer this question in the following format – the NuoDB installation file is __ MB for the ___ platform. Click on the download link and download your installation file for NuoDB; you can figure out the file size from the properties of the file. We have exciting prizes for the winners. Prizes: 1) 24 Amazon Gift Cards of USD 10, one card every hour for the next 24 hours. (Open anywhere in the world) 2) One grand winner will get the Joes 2 Pros SQL Server 2012 Training Kit worth USD 249. (Open wherever Amazon ships books.) Amazon | 1 | 2 | 3 | 4 | 5 Rules: The contest will be open till July 21, 2013. All valid comments will be hidden till the result is announced. The winners will be announced on July 24, 2013. Hint: Download NuoDB. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
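
    As a quick refresher (my illustration, not part of the contest post), the A in ACID, atomicity, is easy to see from client code: either every statement inside a transaction commits, or none of them do. The connection string and the Accounts table below are made up for the example.

        // Minimal sketch of atomicity with ADO.NET; schema and connection string are hypothetical.
        using System;
        using System.Data.SqlClient;

        class AtomicTransferDemo
        {
            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=Bank;Integrated Security=true"))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        try
                        {
                            Exec(conn, tx, "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1");
                            Exec(conn, tx, "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2");
                            tx.Commit();      // both updates become visible together
                        }
                        catch
                        {
                            tx.Rollback();    // on failure neither update survives
                            throw;
                        }
                    }
                }
            }

            static void Exec(SqlConnection conn, SqlTransaction tx, string sql)
            {
                using (var cmd = new SqlCommand(sql, conn, tx))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }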

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently access data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through Java Persistence API (JPA via TopLink Grid) and now, through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. It is still in its limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph , or send your questions to [email protected]. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: CloudTran,data grid,M,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP? Representational State Transfer (REST) uses standard HTTP/HTTPS as its interface, allowing clients to access resources identified by requested URIs. An example URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of using HTTP/HTTPS as the interface is caching. Caching can be done for a web service much like it is done for requested web pages; it reduces web server processing and improves response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state. Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or other standard transport protocols. Furthermore, SOAP utilizes XML to define a message, define how a message is to be processed, define the encoding of a message, and lay out procedure calls and responses. Whereas REST aligns more with a resource view, SOAP aligns more with a method view, in that business logic is typically exposed as methods through a SOAP web service because such services can retain state. In addition, SOAP requests are not cached, so every request is processed by the server. As stated before, SOAP does retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is better suited to enterprise-level services that implement standard exchange formats in the form of contracts, since REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account and, if the transaction failed, retry it automatically, ensuring that the request was completed. With REST, failed service calls must be handled manually by the requesting application. References: Francia, S. (2010). SOAP vs. REST. Retrieved 11 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each
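
    To make the resource-oriented, stateless style concrete, here is a minimal client sketch (mine, not from the article) issuing a REST call against the article's example URI; every request carries all the information the server needs.

        // Minimal REST client sketch using HttpClient; the URI is the article's placeholder example.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class RestClientDemo
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    var uri = "http://mydomain.com/service/method?parameter=var1&parameter=var2";
                    HttpResponseMessage response = await client.GetAsync(uri);   // stateless GET of a resource
                    response.EnsureSuccessStatusCode();

                    string body = await response.Content.ReadAsStringAsync();
                    Console.WriteLine(body);
                }
            }
        }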

    Read the article

  • Is my work on a developer test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded, I had a pretty lengthy phone interview (perhaps an hour +) and they then set me up with a developer test. I was told that this test is estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored. The developer test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, has a pretty complicated search filtering scheme (there are about 15 statuses and when sending the search to the server you can search by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc. Loooong story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution). After completing, I was told to commit my changes to source control... Hmm, OK. I get an email back that they thought that the SVGs could have their color changed differently, they found a bug in this edge-case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I noticed some anomalies in the code that was already in there where it looked like other people had coded all of one functionality and not anything else that I could find. Am I just being used for cheap labor? Even if they pay me the promised 50 dollars and hour for 6 hours, I have committed like 18 hours to this thing now. If I bug fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free. I have taken a number of developer tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their developer test to field some of the functionality, that they don't have time for in their normal team, on the cheap. Also, I wouldn't mind a 'devtest' tag.

    Read the article

  • Enhance Your Gmail Account in Chrome

    - by Asian Angel
    Are you tired of items like the Chat and Invite Boxes cluttering up your Gmail account? Then join us as we look at the Better Gmail extension for Google Chrome. Before Here are some examples of items that you may be tired of looking at in your Gmail account such as the “Footer” below your “Inbox”, the “Chat Box”, and the “Invitation Box”. Perhaps you would also like to have the “New Window, Print all, & Create a document Commands” moved elsewhere. And of course there is everyone’s “favorite” sponsored links… Time to do some cleaning up and reorganizing. Better Gmail in Action As soon as you have installed Better Gmail a new tab will automatically open and present you with the available options. Place a “checkmark” in the box for each option that you would like activated and click on “Save” when finished. Note: The final option entry is a tie-in with two other “linked” extensions (Folders4Gmail & HTML Signature) while the middle listing is a link to an article for disabling Google Buzz. Once you have saved your changes in the “Options” you will be prompted to refresh your Gmail tab to see the changes. Going back to our “Inbox Area” everything looks so much more streamlined and clean now. Goodbye clutter! The “New Window, Print all, & Create a document Commands” definitely look a lot nicer as a small toolbar above our e-mail. And the right side…you can see for yourself just how much better that looks. No more distractions there to bother you as you read your e-mail. Conclusion If you have been wanting to get rid of the undesirable elements visible in your Gmail account then hurry over to the Better Gmail page, grab the extension and enjoy the better view. Links Download the Better Gmail extension (Google Chrome Extensions) Similar Articles Productive Geek Tips Figure out which Online accounts are selling your email to spammersAdd a Remember The Milk Task Pane to Gmail in ChromeHow to Send and Receive Hotmail from Your Gmail AccountAdd Your Gmail To Windows Live MailOpen Your Gmail Account in a Popup Window TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 PCmover Professional Windows Media Player 12: Tweak Video & Sound with Playback Enhancements Own a cell phone, or does a cell phone own you? Make your Joomla & Drupal Sites Mobile with OSMOBI Integrate Twitter and Delicious and Make Life Easier Design Your Web Pages Using the Golden Ratio Worldwide Growth of the Internet

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments, then I came across this site, so my apologies to anyone who has seen this already.) I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conform to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf* .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly suprised to see the original "Ambience" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white. This last time (just yesterday), trying numerous combinations all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: How do I recover the out-of-the-box theme for my Gnome/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this? It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

    Read the article

  • ArchBeat Link-o-Rama for 2012-07-11

    - by Bob Rhubart
    Is the future of retail showrooming? | GigaOm "The digital shopper isn’t just digital and she expects to be served seamlessly across all channels, physical and digital," reports GigaOm. Twenty years into the Internet era and the changes just keep coming. Solution architects take note... Agile Bureaucracy: When Practices become Principles | Jim Highsmith.com "Principles and values are a critical part of keeping individuals in organizations aligned and engaged," says Agile guru Jim Highsmith, "but the more pseudo-principles are piled on top of principles, the less and less organizations are able to adapt." Oracle Fusion Applications 11g Basics | Michel Schildmeijer "We are trying to build up a Oracle Fusion Apps environment on a Exalogic system, though still on bare metal, because officially there still is no Oracle VM available yet on Exalogic," says Michel Schildmeijer, an Oracle Fusion Middleware Architect at Qualogy. "It is a bit of a challenge, but getting to know the basics and which components the install, build and configure phase use, might bring you a step further on the way." Process Centric Banking: Loan Origination Solution | Manish Palaparthy This interesting, detailed post by Manish Palaparthy explains the process behind the execution of a proof-of-concept for a Fusion Middleware-based loan-origination solution for a bank. The solution incorporates Oracle BPM Suite, Webcenter, and ADF technolgies in a SOA infrastructure. How eBay and Facebook are Cleaning Up Data Centers | Amy Gallo - HBR The Cloud has needs! As reported by Amy Gallo in an article in the Harvard Business Review, "The electricity demand of data centers and the telecommunications network is rivaling that of most nations. If the cloud were itself a country, it would rank fifth in the world on energy demand behind the U.S., China, Russia, and Japan." Do WebLogic configuration from ANT | Edwin Biemond "With WebLogic WLST you can script the creation of all your Application DataSources or SOA Integration artifacts( like JMS etc)," says Oracle ACE Edwin Biemond. "This is necessary if your domain contains many WebLogic artifacts or you have more then one WebLogic environment. If so, you want to script this so you can configure a new WebLogic domain in minutes and you can repeat this task with always the same result." Oracle Special-Edition E-Book: Cloud Architecture for Dummies Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale with Cloud Architecture for Dummies, a free Oracle e-book. (Registration required.) Thought for the Day "One of the best things to come out of the home computer revolution could be the general and widespread understanding of how severely limited logic really is." — Frank Herbert Source: SoftwareQuotes.com

    Read the article

  • Data Structures usage and motivational aspects

    - by Aubergine
    Throughout my long student life I kept wondering why there are so many data structures, yet so little apparent use for most of them. That opinion didn't really change when I got a job. We have brilliant books on what they are and what their complexities are, but I never come across resources that give a good hint of their practical usage. I perfectly understand that I am supposed to look at the problem, analyse the required operations, and pick the data structure that performs them efficiently. In practice, however, I never do that, not out of laziness, but because at work I put time pressure above self-development. For a while I assumed that as I became a better developer I would automatically use more of them; that never happened, or maybe I just didn't notice. Then I found that my colleagues are usually in the same boat: they know roughly three data structures, are perfectly happy with that, and refuse to discuss the matter further with me, steering the conversation back to 'cool new languages', 'libraries that do the job for you', and the joy of working under scrumban, etc. I am stuck with ArrayLists, arrays, and SortedMap, which no matter what I do always suffice, or which I can tweak until they do the task. Yes, that might be inefficient, but do we really have to care when Intel keeps raising performance year after year regardless of whether we improve our skills? Does a new Xeon or IBM machine really care what we use? What if I like building things, but am not particularly excited about whether it runs in n log(n) or just n? Over twenty years processing power has increased enormously; does that give us the freedom not to be critical about which structure to use? On top of that, newer and better-optimised languages keep appearing that support multiple cores more efficiently. To be more specific: I would like to find motivational material on complex, real-world areas or cases where data structures are used to real effect, and I would be really grateful for relevant resources. There is a similar question, but in the end its links again mostly just describe the structures or give contrived examples (vehicles, students, or a holy grail quest - yes, very relevant), and people keep repeating that "the scenario decides the data structure to use". I want to see those complex scenarios so that I can recognise similarities to my own scenario and then apply them: the scenarios where it really matters, and not necessarily only in a quantitative sense. Is efficiency really the only concern data structures address, and nothing else? There seems to be no particular convenience for the developer in using one over another. (Only when I found scientific resources on why exactly simple carbohydrates are evil did I stop eating sugar and sweets completely, replacing them with less harmful fruit; I hope you can see the analogy.)
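    A small, self-contained Java sketch of the kind of scenario being asked about (hypothetical names and sizes, purely for illustration, not taken from the question): repeated membership tests against a large collection, where an ArrayList pays for a linear scan on every lookup while a HashSet stays roughly constant-time.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        // Hypothetical micro-benchmark: the same membership test against an ArrayList and a HashSet.
        public class LookupDemo {
            public static void main(String[] args) {
                int n = 100_000;          // made-up data size
                int lookups = 10_000;     // made-up number of queries
                List<Integer> list = new ArrayList<>();
                Set<Integer> set = new HashSet<>();
                for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

                long t0 = System.nanoTime();
                for (int i = 0; i < lookups; i++) list.contains(n - 1);  // linear scan every time
                long t1 = System.nanoTime();
                for (int i = 0; i < lookups; i++) set.contains(n - 1);   // hash lookup every time
                long t2 = System.nanoTime();

                System.out.printf("ArrayList: %d ms, HashSet: %d ms%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            }
        }

    On typical hardware the list half takes orders of magnitude longer than the set half; a faster CPU shrinks both numbers but never closes a gap that grows with the size of the data, which is the usual counter to the "hardware will keep getting faster" argument raised above.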

    Read the article

  • Unable to remove limit on memory usage for PHP script.

    - by Jess Telford
    The Situation I am having an issue with a PHP script getting the following error message: Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969 The script I'm running is: /path/to/piwik/misc/cron/archive.sh I am assuming the numbers are bytes, which means the total is approximately 360MB. For all intents and purposes, I have increased the memory limits on the server well above 360MB, yet this is the number (give or take a byte) it consistently errors out at. Please note: This question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving" The question Given that the script is attempting to use over 360MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to PHP on my server? What I've tried Increasing PHP's memory_limit Given the php.ini file: php -i | grep php.ini Configuration File (php.ini) Path => /usr/local/lib Loaded Configuration File => /usr/local/lib/php.ini I have edited that file, so the memory_limit directive reads: memory_limit = -1 Restart Apache, and check the new value has stuck: $ php -i | grep memory_limit memory_limit => -1 => -1 Run the script, and get the same error. I've also tried 1G, 768M, etc., all to the same result (i.e. no change). Update 22nd June: Based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect. Removing the memory limit on child processes of Apache I have found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and confirmed by checking httpd.conf). Both of these resulted in no change to the script erroring at 360MB. Increasing the per-process memory limits of Linux The current limits set on the system: $ ulimit -m 524288 $ ulimit -v 524288 I have attempted to set both of these to unlimited: $ ulimit -m unlimited $ ulimit -v unlimited $ ulimit -m unlimited $ ulimit -v unlimited Once again, this has resulted in absolutely no improvement in my problem. My setup $ cat /etc/redhat-release CentOS release 5.5 (Final) $ uname -a Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux $ php -i | grep "PHP Version" PHP Version => 5.2.9 $ httpd -V Server version: Apache/2.0.63 Server built: Feb 2 2011 01:25:12 Cpanel::Easy::Apache v3.2.0 rev5291 Server's Module Magic Number: 20020903:13 Server loaded: APR 0.9.17, APR-UTIL 0.9.15 Compiled using: APR 0.9.17, APR-UTIL 0.9.15 Architecture: 64-bit Server compiled with....
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf" Output of $ php -i: http://pastebin.com/EiRut6Nm

    Read the article

  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial, it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances. I know by experience that writing a lexer/parser by hand (unless the grammar is trivial) is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc or a combinator library like Parsec. The former was good as well but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes, the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you never programmed in anything other than java/c#, but then this would be true of anything not written in java/c#. At some point however, I've been literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I've been taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to re-surface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution: you need a lot of ifs to handle special cases and things quickly spiral out of control; lots of hardcoded array indexes make maintenance painful; it is extremely difficult to handle things like a function call as a method argument (e.g. add( (add a, b), c)); it is very difficult to provide meaningful error messages in case of syntax errors (very likely to happen). I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copy-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project after all. (I won't use this argument as it will probably sound offensive, and starting a war is not going to help anybody.) What are your favorite arguments against parsing the Cthulhu way?* *of course if you can convince me he's right I'll be perfectly happy as well
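    A rough Java illustration of the nesting argument (a hypothetical call-style grammar and made-up names, not the poster's actual DSL): splitting on delimiters cannot tell which comma belongs to which call in add(add(a, b), c), whereas even a minimal hand-rolled recursive-descent parser handles the nesting naturally.

        import java.util.ArrayList;
        import java.util.List;

        // Minimal recursive-descent parser for expressions of the form name or name(arg, arg, ...).
        public class MiniParser {
            private final String src;
            private int pos = 0;

            MiniParser(String src) { this.src = src; }

            String parseExpr() {
                String name = parseName();
                if (peek() == '(') {                      // a call: name(arg, arg, ...)
                    expect('(');
                    List<String> args = new ArrayList<>();
                    if (peek() != ')') {
                        args.add(parseExpr());
                        while (peek() == ',') { expect(','); args.add(parseExpr()); }
                    }
                    expect(')');
                    return "call " + name + args;         // a real DSL would build an AST node here
                }
                return "var " + name;
            }

            private String parseName() {
                skipSpaces();
                int start = pos;
                while (pos < src.length() && Character.isLetterOrDigit(src.charAt(pos))) pos++;
                if (start == pos) throw new IllegalStateException("expected a name at " + pos);
                return src.substring(start, pos);
            }

            private char peek() { skipSpaces(); return pos < src.length() ? src.charAt(pos) : '\0'; }

            private void expect(char c) {
                if (peek() != c) throw new IllegalStateException("expected '" + c + "' at " + pos);
                pos++;
            }

            private void skipSpaces() { while (pos < src.length() && src.charAt(pos) == ' ') pos++; }

            public static void main(String[] args) {
                System.out.println(new MiniParser("add(add(a, b), c)").parseExpr());
                // For contrast, "add(add(a, b), c)".split(",") yields
                // ["add(add(a", " b)", " c)"] - the nested call is torn apart.
            }
        }

    The combinator-based version described in the question expresses the same idea more declaratively; the sketch is only meant to show that delimiters alone carry no information about structure, which is the core of the argument against the split-based approach.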

    Read the article

< Previous Page | 284 285 286 287 288 289 290 291 292 293 294 295  | Next Page >