Search Results

Search found 4240 results on 170 pages for 'thoughts'.

Page 4 of 170

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked with setting up retail operations for an electronic goods, daily consumables, or luxury brand retailer. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place? What are the essential business activities underpinning each of the business capabilities identified in Step 1? What is the set of steps you need to perform to execute each of the business activities identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to these questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components, and getting them up and running. In the long term, as you grow your operations organically or through M&A, partnerships, and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations. "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following: increased time and cost due to resource wastage from re-inventing the wheel, i.e. re-creating vanilla processes from scratch and incurring otherwise avoidable mistakes and errors by ignoring the experience of others; and sub-optimal process efficiency due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing. Embracing ARTS standards as a blueprint for establishing, managing, or streamlining your retail operations can benefit you in the following ways: improved time-to-market from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, while lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems; and lower operating costs by embracing the ARTS enterprise-wide process reference model to develop and streamline retail operations holistically instead of through a narrow, siloed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization. While parity with an industry standard such as the ARTS business process model does not by itself create differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Webcast: Redefining the CRM and E-Commerce Experience with Oracle Exalogic

    - by Sanjeev Sharma
    Have your CRM applications been growing in cost and complexity? Do your customers want to reach you through more channels than ever? Does your business need to handle peak sales and customer service demand, but your IT budget only covers current needs? Learn how Oracle Exalogic combines Oracle hardware and software to achieve breakthrough performance and scalability, and how real Oracle customers are simplifying the deployment and management of their CRM applications. Register for the webcast here.

    Read the article

  • Customer Experience Management for Retail 2.0 - part 2 / 2

    - by Sanjeev Sharma
    In the previous post, I discussed some of the key trends shaping up in the retail industry, their implications, and the challenges facing retailers seeking to regain control of the buyer-seller relationship. Is Customer Experience Management the panacea for the ailing retailers who are now awakening to the power of the consumer? Quite honestly, customer acquisition, retention, and satisfaction have been top of mind for retailers for quite some time now. The missing piece of this puzzle is bringing all those countless hours of strategy and planning to fruition. This is more of an execution gap than anything else. Although technology has made consumers more informed, more mobile, and more social, customer experience is still largely defined by delivering on the following: consistent experiences, whether shopping online or offline; personalizable interaction ("mass market" sounds good as an internal strategy but not when you are a buyer!); and timely order fulfillment, if not proactive notification of delays. Below is a concept architecture for streamlining front-end, mid-office, and back-end interfaces through shared processes to achieve consistency and efficiency in managing the customer experience from order capture to order provisioning.

    Read the article

  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
    Oracle Open World (OOW) earlier this month was a great occasion to talk with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: "BPM is a journey", "experiences to share", "our organization now understands what BPM is", and my favorite (with some caveats): "BPM is like wine tasting: once you start, you want to try more". These customers have started their journey, climbed up the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes, where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development, and to increase the efficiency of end-to-end processes partially covered by apps/custom dev. The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets and equipping them with the right tools. We decided that we needed to identify destinations and plot routes showing the fastest path to those destinations. In the end we want to enable customers to reach "Process Excellence": continuously set new targets and consistently and efficiently reach them. The result is Oracle Process Accelerators (PAs), solutions built using the rich functionality in Oracle BPM Suite. PAs offer a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improvements to other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests. Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management, and support (approvers). "Getting there first" means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers' perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers. The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners. They all confirmed that both business and IT at organizations benefit from PAs when it comes to exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough across and within industries. We'll keep on building sets of requirements, delivering functional designs, constructing solutions using Oracle BPM, and testing them not only functionally but also for performance, scalability, and clustering, making them robust and product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map. Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.

    Read the article

  • BPM in Financial Services Industry

    - by Sanjeev Sharma
    The following series of blog posts discusses common BPM use-cases in the Financial Services industry. Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature, regulation takes a prescriptive, common-for-all approach to managing financial and non-financial risk. Needless to say, mere compliance with regulation will no longer lead to sustainable differentiation. For details, check out the two-part series on managing operational risk of financial services processes (part 1 / part 2). Payments processing is a central activity for financial institutions, especially retail banks, and for intermediaries that provide clearing and settlement services. Visibility of payments processing is essentially about the ability to track payments and handle payments exceptions as payments flow from initiation to settlement. For details, check out the two-part series on improving visibility of payments processing (part 1 / part 2).

    Read the article

  • Looking for some thoughts on an image printing app

    - by Alex
    Hey all, I'm looking for thoughts/advice. I have an upcoming project (all .NET) that will require the following: pull data once a day from an online service provider based on certain criteria, and save the data locally for reference and reporting. The data that's pulled will be used to create gift cards. So after the data is loaded, a process will run to generate "virtual cards" and send them to a network printer. Once printed, the system will update the local data, recording a successful or failed print. My initial thought was to create a Windows service to pull the data... but then I couldn't decide how I was going to put a "virtual card" together and get it to print. Then I considered doing it as a WPF app. I figure that will give me access to the graphics and printing abilities. Maybe neither of these is the right direction... Any ideas or thoughts would be greatly appreciated. Alex
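
    For the WPF route mentioned above, a minimal sketch of the core idea is below: compose the "virtual card" as an in-memory WPF visual and send it to a named network printer. This is only an illustration under assumptions, not a tested implementation; the CardPrinter class, the card layout, and the success/failure convention are all hypothetical.

        // Minimal sketch: compose a "virtual card" as a WPF visual and print it.
        // CardPrinter and the layout below are hypothetical placeholders.
        using System;
        using System.Printing;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        public static class CardPrinter
        {
            public static bool TryPrintCard(string cardCode, string printerName)
            {
                // Compose the card as an in-memory visual.
                var card = new Grid { Width = 340, Height = 210, Background = Brushes.White };
                card.Children.Add(new TextBlock
                {
                    Text = cardCode,
                    FontSize = 24,
                    HorizontalAlignment = HorizontalAlignment.Center,
                    VerticalAlignment = VerticalAlignment.Center
                });

                // Run layout so the visual has a real size before printing.
                card.Measure(new Size(card.Width, card.Height));
                card.Arrange(new Rect(0, 0, card.Width, card.Height));

                try
                {
                    var dialog = new PrintDialog
                    {
                        PrintQueue = new PrintServer().GetPrintQueue(printerName)
                    };
                    dialog.PrintVisual(card, "Gift card " + cardCode);
                    return true;   // caller records a successful print locally
                }
                catch (PrintSystemException)
                {
                    return false;  // caller records a failed print
                }
            }
        }

    A service could still do the daily pull; the WPF printing classes just need to run on an STA thread, which is one reason a small WPF app (or a service paired with a desktop print agent) may be the simpler split.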

    Read the article

  • Thoughts on Free Splunk

    - by dan_vitch
    I am considering implementing Splunk at my company but am leery about the financial investment. I noticed there is a free version of Splunk that seems to be good enough. Can anyone tell me if you are using the free version at your company? Do you find the free version to be adequate, or just a springboard for an eventual purchase?

    Read the article

  • Thoughts on home NAS server

    - by user826955
    I currently have a NAS with a 2x2TB HDD + 1x16GB SSD layout on a mini-ITX Atom board. The NAS is in a Lian Li PC-Q07 case. On this system I was running FreeBSD 8 with a gmirror RAID 1 setup, which was enough for my needs. So far I was using the NAS for: a fileserver with the AFP protocol (only Mac clients used); an SVN server hosting all the source trees of my projects; JIRA (performance was okay-ish); and Time Machine backup for the Macs. The power consumption was about 38W, although I did not put the HDDs to sleep when unused (I think this is not possible in a RAID setup). I liked the NAS because the performance was good through gigabit LAN (enough for my needs), power consumption was good, and it's a pretty small case that fits in one of my cupboards. I disliked the NAS a bit because it was noisy; the Q07 case vibrated a good amount because of the HDDs, so I switched the NAS off every evening. Also, I do not have a real backup of the data on the NAS, only the internal RAID 1 as safety. I really don't want to lose my source trees under any circumstances, so I would sleep better if I knew I had regular backups somewhere. Recently, the board seems to have died; I can't boot anymore. Thus, I was thinking about a redesign of my NAS (I still have to find out which parts are broken; I probably need to replace the mainboard and SSD, while the HDDs seem to be okay). First of all, I was wondering what other users have as backup for their NAS? Are you actually using a second NAS and regularly copying over the data to keep it safe? Or is there any better solution to this? I was thinking about getting a cheap NAS like the Synology DS112j with only one disk, and using rsync or something similar to regularly copy data over to the second NAS (wake the second NAS upon start, shut it down after the copy). Although this approach seems somewhat weird, it would have the benefit (?) that I could use a single disk instead of RAID in the main NAS, put the disk to sleep when idle, and have the NAS running 24/7 with low energy consumption (I found no way to do this with a gmirror setup). Is there any recommended backup solution for a small NAS? Then I was thinking about a different RAID setup. Since I have to buy a new mainboard as well as an SSD, I might as well switch over to an i3 board with more RAM, and also switch to ZFS. I am not familiar with ZFS; I've never used it, but I read and hear much about it. Would it be viable to set up a ZFS storage pool with only 2 disks? Can I easily extend this pool with more disks once I choose to add some? I could maybe get a new case like the Fractal Design Array R2, which has more 3.5" slots. I could as well get another 2 disks, but I would prefer sticking with the existing 2 for energy/heat/noise reasons. Should I go for a ZFS storage pool or stick with my gmirror setup? I would also like to keep FreeBSD as the operating system, and I don't need any web GUI or the like (that is, I don't need/want to use FreeNAS or Openfiler etc.). Does anyone maybe have a sample setup in use, so I can compare energy consumption/noise/software setup? Any guidance towards the NAS of my dreams (silent, low energy, safe with backups) is much appreciated.

    Read the article

  • What are your thoughts on Raven DB?

    - by Ronnie Overby
    What are your thoughts on Raven DB? I see this below my title: "The question you're asking appears subjective and is likely to be closed." Please don't do that. I think the question is legit because: Raven DB is brand-spanking-new, and RDBMSs are probably the de facto choice for data persistence for .NET developers, which Raven DB is not. Given these points, I would like to know your general opinions. Admittedly, the question is sort of broad. That is intentional, because I am trying to learn as much about it as possible; however, here are some of my initial concerns and questions: What about using Raven DB for data storage in a shared web hosting environment, since Raven DB is interacted with through HTTP? Are there any areas that Raven DB is particularly well or not well suited for? How does it rank among alternatives, from a .NET developer's perspective?

    Read the article

  • Thoughts on Static Code Analysis Warning CA1806 for TryParse calls

    - by Tim
    I was wondering what people's thoughts were on the CA1806 (DoNotIgnoreMethodResults) static code analysis warning when using FxCop. I have several cases where I use Int32.TryParse to pull in internal configuration information that was saved in a file. I end up with a lot of code that looks like: Int32.TryParse(someString, NumberStyles.Integer, CultureInfo.InvariantCulture, out intResult); MSDN says the value of intResult defaults to zero if the parse fails, which is exactly what I want. Unfortunately, this code will trigger CA1806 when performing static code analysis. It seems like a lot of redundant/useless code to fix the errors with something like the following: bool success = Int32.TryParse(someString, NumberStyles.Integer, CultureInfo.InvariantCulture, out intResult); if (!success) { intResult = 0; } Should I suppress this message or bite the bullet and add all this redundant error checking? Or maybe someone has a better idea for handling a case like this? Thanks!
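
    One way to keep the analyzer happy without the redundant reset is to consume the boolean once, inside a small helper; the reset branch is unnecessary because TryParse guarantees the out value is zero on failure. A minimal sketch, where ParseOrZero is a hypothetical helper name (suppressing CA1806 with a justification attribute is the other common route):

        // Sketch: centralize the "zero on failure" pattern so the TryParse
        // return value is visibly consumed. ParseOrZero is a hypothetical name.
        using System;
        using System.Globalization;

        static int ParseOrZero(string someString)
        {
            int intResult;
            bool parsed = Int32.TryParse(someString, NumberStyles.Integer,
                                         CultureInfo.InvariantCulture, out intResult);
            // TryParse already sets intResult to 0 on failure, so consuming
            // the boolean here exists purely to make the intent explicit.
            return parsed ? intResult : 0;
        }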

    Read the article

  • Thoughts/Input about Database Design for a CMS

    - by dallasclark
    I'm just about to expand the functionality of our own CMS but was thinking of restructuring the database to make it simpler to add/edit data types and values. Currently, the CMS is quite flat - the CMS requires a field in the database for every type of stored value (manually created). The first option that comes to mind is simply a table which keeps the data types (ie: Address 1, Suburb, Email Address etc) and another table which holds the values for each of these data types. Just as WordPress keeps values in the 'options' table, PHP's serialize would be used to store an array of values. The second option is how Drupal works: the CMS creates tables for every data type. Unlike the WordPress approach, this can be a bit of overkill, but it is really useful for SQL queries when ordering and grouping by a particular value. What are everyone's thoughts?

    Read the article

  • Creating a proper CMS thoughts

    - by dallasclark
    I'm just about to expand the functionality of our own CMS but was thinking of restructuring the database to make it simpler to add/edit data types and values. Currently, the CMS is quite flat - the CMS requires a field in the database for every type of stored value. The first option that comes to mind is simply a table which keeps the data types (ie: Address 1, Suburb, Email Address etc) and another table which holds the values for each of these data types. Just as WordPress keeps values in the 'options' table, serialize would be used to store an array of values. The second option is how Drupal works: the CMS creates tables for every data type. Unlike the WordPress approach, this can be a bit of overkill, but it is really useful for SQL queries when ordering and grouping by a particular value. What are everyone's thoughts?

    Read the article

  • Thoughts on try-catch blocks

    - by John Boker
    What are your thoughts on code that looks like this: public void doSomething() { try { // actual code goes here } catch (Exception ex) { throw; } } The problem I see is that the actual error is not handled; the exception is just thrown in a different place. I find it more difficult to debug because I don't get a line number where the actual problem is. So my question is: why would this be good? ---- EDIT ---- From the answers it looks like most people are saying it's pointless to do this with no custom or specific exceptions being caught. That's what I wanted comments on: when no specific exception is being caught. I can see the point of actually doing something with a caught exception, just not the way this code is written.
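
    For contrast, a minimal sketch of when a catch-and-rethrow block earns its keep: "throw;" preserves the original stack trace (so the line number of the real failure survives), while "throw ex;" resets it to the rethrow site. A bare catch that only rethrows, as in the question, behaves the same as having no try block at all. The Log method below is a hypothetical stand-in:

        // Sketch: a rethrow is only useful when the catch does real work first.
        using System;

        public class Worker
        {
            public void DoSomething()
            {
                try
                {
                    // actual code goes here
                }
                catch (Exception ex)
                {
                    Log(ex);  // only worth catching if something happens here
                    throw;    // rethrow, preserving the original stack trace
                    // "throw ex;" would instead reset the stack trace to this
                    // line, hiding where the failure really occurred
                }
            }

            private static void Log(Exception ex) =>
                Console.Error.WriteLine(ex);  // hypothetical stand-in logger
        }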

    Read the article

  • thoughts on making a widget?

    - by Haroldo
    I'm only going to be able to get each site that wants my widget to copy and paste the code block in once, so it needs to be really future-proof. I've thought about this for a while, and this is the widget code I've come up with: <script type="text/javascript" src="http://www.mydomain.com/my-future-proof-widget.js"></script> <div id="mywidget"></div> Would this be the best plan? Could this create any limitations? Any other thoughts on widgetry?

    Read the article

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches: all data is stored in one database, giving me the advantage of a single point of update; or separate databases, so each client has its own database, giving me the advantage of being able to measure bandwidth. Option 1 has the disadvantage of making bandwidth hard to measure, while option 2 gives up the single-point-of-update structure. Are there any generic approaches for creating a sort of update system, so my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database)? Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you, and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen

    Read the article

  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (given that I already had a bunch of classes to work with the data). All worked well on my dev box, which was running PHP 5.3. Long story short, 5.1's memory management seems a lot worse, and I've had to do a lot of fooling around to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time. I've since moved my dev server back to 5.1 so I don't run into this problem again... For mining MySQL databases to roll up statistics for different periods and resolutions, potentially with a process that does this all the time in the future (as opposed to on a cron schedule), what language choice do you recommend? I was looking at Python (I know it more or less), Java (don't know it that well), or sticking it out with PHP (know it quite well). Thanks for any suggestions. Josh

    Read the article

  • Thoughts on moving to Maven in an enterprise environment

    - by Josh Kerr
    I'm interested in hearing from those who either A) use Maven in an enterprise environment or B) tried to use Maven in an enterprise environment. I work for a large company that is contemplating bringing Maven into our environment. Currently we use OpenMake to build/merge, and home-grown software to deploy code to 100+ servers running various platforms (e.g. WAS and JBoss). OpenMake works fine for us; however, Maven does have some ideal features, the most important being dependency management. But is it viable in a large environment? Also, what headaches, if any, have you incurred in maintaining a Maven environment? Side note: I've read http://stackoverflow.com/questions/861382/why-does-maven-have-such-a-bad-rep, http://stackoverflow.com/questions/303853/what-are-your-impressions-of-maven, and a few other posts. It's interesting seeing the split between developers.

    Read the article

  • need thoughts on my interview question - .net, c#

    - by uno
    One of the questions I was asked was this: I have a database table with the following columns: pid - unique identifier; orderid - varchar(20); documentid - int; documentpath - varchar(250); currentlocation - varchar(250); newlocation - varchar(250); status - varchar(15). I have to write a C# app to move the files from currentlocation to newlocation and update the status column as either 'SUCCESS' or 'FAILURE'. This was my answer: create a list of all the records using LINQ; create a command object which would perform the file moves; using foreach, invoke a delegate to move the files, then use EndInvoke to capture any exception and update the DB accordingly. I was told that the Command pattern and delegates did not fit the bill here, and I was asked to think of and implement a more favorable GoF pattern. I'm not sure what they were looking for. In this day and age, do candidates need to keep a lot of info in their heads, when one always has Google to find an answer and come up with a solution?
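
    It isn't stated which pattern the interviewers wanted, so the following is only a guess at one defensible reading: Template Method, where the move-then-record skeleton is fixed and the actual transfer step varies. All type and method names here are hypothetical:

        // Hedged sketch: Template Method. Process() fixes the skeleton
        // (transfer, then report status); subclasses vary the transfer step.
        using System;
        using System.IO;

        public abstract class DocumentMover
        {
            // The template method: the invariant move-then-record workflow.
            public string Process(string currentLocation, string newLocation)
            {
                try
                {
                    Transfer(currentLocation, newLocation);
                    return "SUCCESS";  // caller writes this to the status column
                }
                catch (Exception)
                {
                    return "FAILURE";
                }
            }

            protected abstract void Transfer(string source, string destination);
        }

        public sealed class LocalDiskMover : DocumentMover
        {
            protected override void Transfer(string source, string destination)
            {
                File.Move(source, destination);
            }
        }

    A plain foreach over the records, calling Process and writing the returned status back to the table, would then complete the app; whether that reading matches what the interviewers had in mind is anyone's guess.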

    Read the article

  • Thoughts on security model to store credit card details

    - by Faisal Abid
    Here is the model we are using to store the CC details; how secure does this look? All our information is encrypted using public key encryption, and the keypair is user-dependent (it's generated on the server, and the private key is symmetrically encrypted using the user's password, which is also hashed in the database). So basically, on first run the user sends in his password via an SSL connection; the password is used with the addition of salt to generate an MD5 hash, and the password is also used to encrypt the private key, which is then stored on the server. When the user wants to make a payment, he sends his password. The password decrypts the private key, the private key decrypts the CC details, and the CC details are charged.
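
    To make the flow concrete, here is a minimal sketch of the payment-time decryption path as described. The poster only says "symmetric" and "public key", so AES and RSA are assumptions, the key derivation shown is PBKDF2 rather than the salted MD5 mentioned (MD5 is a hash, not a key-derivation scheme, so some derivation step is implied anyway), and all names and the modern .NET APIs used here are illustrative:

        // Hedged sketch of the described flow: password -> symmetric key ->
        // decrypt stored private key -> decrypt card details. AES/RSA/PBKDF2
        // are assumptions; the post does not name concrete algorithms.
        using System;
        using System.Security.Cryptography;

        public static class CardVault
        {
            public static byte[] DecryptCardDetails(
                string password, byte[] salt, byte[] aesIV,
                byte[] encryptedPrivateKey, byte[] encryptedCardDetails)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, 100_000,
                                                        HashAlgorithmName.SHA256))
                using (var aes = Aes.Create())
                {
                    // 1. Derive the symmetric key from the user's password.
                    aes.Key = kdf.GetBytes(32);
                    aes.IV = aesIV;

                    // 2. Decrypt the stored RSA private key.
                    byte[] privateKeyBlob;
                    using (var dec = aes.CreateDecryptor())
                    {
                        privateKeyBlob = dec.TransformFinalBlock(
                            encryptedPrivateKey, 0, encryptedPrivateKey.Length);
                    }

                    // 3. Use the private key to recover the card details.
                    using (var rsa = RSA.Create())
                    {
                        rsa.ImportRSAPrivateKey(privateKeyBlob, out _);
                        return rsa.Decrypt(encryptedCardDetails,
                                           RSAEncryptionPadding.OaepSHA256);
                    }
                }
            }
        }

    Note that everything hinges on the password: the design means the server can only decrypt card data while the user's password is in hand, which appears to be the property the poster is after.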

    Read the article

  • Thoughts on a Shoutbox anyone?

    - by sologhost
    I want to create a shoutbox, though I'm wondering if there is another way to go about this rather than using setInterval to query the database for new shouts every few seconds. Honestly, I don't like having to go about it this way. It seems a bit redundant and repetitive, and just plain old wrong. Not to mention the blinking of the shouts as it grabs the data. So I'm wondering how the professionals do this. I mean, I've seen shoutboxes that work superbly and don't seem to be using any setInterval or setTimeout JavaScript functions to do this. Can anyone suggest any ideas or an approach to this that doesn't use setInterval or setTimeout? Thanks :)

    Read the article

  • PHP CodeIgniter Framework - Thoughts on developing with it?

    - by Sootah
    I've been reviewing different frameworks to use for my next couple of major web applications, and after days of research am almost set on using CodeIgniter. The reason I'm leaning towards CI is that so far it looks to be the best suited for me. It doesn't require constant command-line access (I am currently using shared hosting; the projects do not warrant a dedicated server yet), nothing special has to be installed on the server running it (you just upload the framework to the root of whatever you're developing), and they appear to have some excellent documentation, videos, and tutorials on how to get started. Do any of you have experience with CodeIgniter? If so, what is your opinion of it and its features? What have you developed with it, and what types of applications is it best suited to create? I certainly don't want to get into a situation where I'm trying to bend a framework to do something that it isn't well suited for. Both of my projects will be database-driven apps that will require user registration and the ability to manipulate data that is specific to each account (posts, listings, user account details, etc.), amongst other things. Also, if you have any other PHP framework suggestions, I am open to them. Thanks in advance for your help! -Sootah

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have: Database Server - physical box, 12 GB RAM, 2 X Quad Core CPU (2.13 GHz Xeon E5506) 2 Web / App servers - cloud servers, 2 GB RAM, 2 VCPUs 300 GB monthly bandwidth I am paying around $900 / month for this. My web / app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config: Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units 2 Web / App servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units If I go with 1 year reserved instances, my upfront cost would be $4,500 and my monthly would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O. Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • Thoughts about alternatives to barplot-with-error-bars

    - by gd047
    I was thinking of an alternative to the barplot-with-error-bars plot. To get an idea by example, I roughly 'sketched' what I mean using the following code:

        library(plotrix)
        plot(0:12, type = "n", axes = FALSE)
        gradient.rect(1, 0, 3, 8, col = smoothColors("red", 38, "red"), border = NA, gradient = "y")
        gradient.rect(4, 0, 6, 6, col = smoothColors("blue", 38, "blue"), border = NA, gradient = "y")
        lines(c(2, 2), c(5.5, 10.5))
        lines(c(2 - .5, 2 + .5), c(10.5, 10.5))
        lines(c(2 - .5, 2 + .5), c(5.5, 5.5))
        lines(c(5, 5), c(4.5, 7.5))
        lines(c(5 - .5, 5 + .5), c(7.5, 7.5))
        lines(c(5 - .5, 5 + .5), c(4.5, 4.5))
        gradient.rect(7, 8, 9, 10.5, col = smoothColors("red", 100, "white"), border = NA, gradient = "y")
        gradient.rect(7, 5.5, 9, 8, col = smoothColors("white", 100, "red"), border = NA, gradient = "y")
        lines(c(7, 9), c(8, 8), lwd = 3)
        gradient.rect(10, 6, 12, 7.5, col = smoothColors("blue", 100, "white"), border = NA, gradient = "y")
        gradient.rect(10, 4.5, 12, 6, col = smoothColors("white", 100, "blue"), border = NA, gradient = "y")
        lines(c(10, 12), c(6, 6), lwd = 3)

    The idea was to use bars like the ones in the second pair instead of those in the first. However, there is something I would like to change in the colors. Instead of a linear gradient fill, I would like to adjust the color intensity in accordance with the values of the pdf of the mean estimator. Do you think it is possible? A slightly different idea (where gradient fill isn't an issue) was to use one (or two back-to-back) bell curve(s) filled with solid color, instead of a rectangle. See for example the shape that corresponds to the letter F here. In that case the bell curve(s) should (ideally) be drawn using something like

        plot(x, dnorm(x, mean = my.mean, sd = std.error.of.the.mean))

    I have no idea, though, of a way to draw rotated (and filled with color) bell curves. Of course, all of the above may be freely judged as midnight springtime dreams :-)

    Read the article

  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.NET applications that have been written over the course of 8 years, mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .Net 1.x and 2.x and run in separate spaces, but they are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav, we have to modify it in all the apps, which is a pain. Also, between the various versions of Crystal Reports and the tables we used to organize the visual elements, we end up with a mess, especially with all the multi-platform .Net versions running. We need to streamline the suite of apps and make it easier to add new apps without a hassle. We also need to bring all these apps under one .Net platform and IDE. In addition, there is a WordPress blog styled to match the application suite "integrated" into the UI, as well as a link to a MediaWiki wiki application. My current thinking is to use an open source content management system (CMS) like Joomla (PHP-based unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would allow us to migrate the wiki content into articles, which could be published without interfering with the .Net apps. Then we would essentially use an IFrame within an "article" to "host" each .Net application, upgrade the .Net apps to VS2010, strip out all the common header/footer controls, and migrate the styles to use the style sheets used in the CMS. As I write this, I certainly realize this is a lot of work, and there are optimization issues it may cause; using IFrames also seems a bit like cheating, and I've read about issues with IFrames. I know that we could use .Net application styling, but it seems like a lot more work (not sure really). Also, the use of a CMS to handle the blog and wiki seems appealing, unless there is a .Net CMS out there that can handle all of these requirements. Given this information, am I totally going in the wrong direction? We tried to use open source and integrate it over time, but now this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right, and should we just focus on streamlining the .Net apps? I understand that no matter what we do, it's going to be a lot of work. The community's considerable experience would be helpful. Thanks!! PS - A complete rewrite is not an option.

    Read the article
