Search Results

Search found 2814 results on 113 pages for 'feedback'.


  • RTS Movement + Navigation + Destination

    - by Oliver Jones
    I'm looking into building my own simple RTS game, and I'm trying to get my head around the movement of single and multi-selected units. (Developing in Unity.) After much research, I now know that it's a bigger task than I thought, so I need to break it down. I already have an A* navigation system with static obstacles taken into account, and I don't want to worry about dynamic local avoidance right now. So I guess my first break-down question would be: how would I go about moving multiple units to the same location? Right now my units move to the location, but because they're all told to go to the same point, they start to 'fight' over one another to get there. I think there are two paths to go down:
    1) Give each individual unit a separate destination point that is close to the 'master' destination point, and get each unit to move to its own point.
    2) Group my selected units into a flock formation, and move that entire flock towards the destination point.
    Questions about each path:
    1a) How can I go about finding a suitable destination point that is close to the master destination? What happens if there isn't a suitable destination point?
    1b) Would this be more CPU heavy, as it has to compute a path for each unit? (40 unit count.)
    2a) Is this a good idea - not giving the units themselves a destination, but instead the flock (which holds the units within)? The units within the flock could then maintain a formation (local avoidance), though again, local avoidance is not an issue at this current time.
    2b) I'm not sure what results I would get with a flock of 5 units versus a flock of 40, as the radius would be greater, which might mess up my A* navigation system. In other words: a flock of 2 units will be able to move down an alleyway, but a flock of 40 won't - and my nav system won't take that into account.
    I would appreciate any feedback. Kind regards, Ollie Jones
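
    A minimal sketch of path 1, assuming Unity's Vector3 and an existing per-unit "move to point" call that feeds the A* system (the class and method names below are illustrative, not from the question's project):

        // Spreads one destination point per selected unit in concentric rings around the
        // 'master' destination, so units stop fighting over a single goal point.
        using System.Collections.Generic;
        using UnityEngine;

        public static class FormationDestinations
        {
            public static List<Vector3> Spread(Vector3 master, int unitCount, float spacing)
            {
                var points = new List<Vector3> { master };
                int ring = 1;
                while (points.Count < unitCount)
                {
                    int slots = ring * 6;                  // roughly six extra slots per ring
                    float radius = ring * spacing;
                    for (int i = 0; i < slots && points.Count < unitCount; i++)
                    {
                        float angle = (Mathf.PI * 2f / slots) * i;
                        points.Add(master + new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius);
                    }
                    ring++;
                }
                return points;                             // issue one move order per unit with its own point
            }
        }

    For question 1a, each generated point would still need to be validated against the navigation data (and nudged or skipped if it isn't walkable); for 1b, this does mean one A* request per unit, which is usually manageable for 40 units if the path requests are spread over a few frames.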

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high traffic websites that all share the same database and that are all heavily database driven. Our SQL Server is maxed out and, although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites, but the type of queries we use negates using SQL dependency caching. We tried SQL replication as a kind of load balancing, but that didn't prove very successful because the replication process is quite demanding on the servers too, and it needed to be done frequently as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised based on the user, we can only do so much. Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query then it gets recorded, and the result is requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time, then the result gets retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications. This seems like a good idea in theory. There's still the same amount of data moving back and forth from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that emulates requests and responses from SQL Server, whether SQL Server's own caching is already doing enough of this that it wouldn't be a benefit, or even whether someone has done this before and I haven't found it? I would welcome any feedback or any references to any relevant projects.
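
    The idea can be prototyped inside the existing data layer before building a standalone proxy. Here is a minimal cache-aside sketch, assuming ADO.NET and a single fixed time-to-live per query (the class and parameter names are illustrative only):

        // Caches query results in memory, keyed on the SQL text plus its parameter values,
        // so repeat requests within the TTL never reach SQL Server.
        using System;
        using System.Collections.Concurrent;
        using System.Data;
        using System.Data.SqlClient;

        public class CachedQueryRunner
        {
            private class Entry { public DataTable Data; public DateTime ExpiresAt; }

            private readonly ConcurrentDictionary<string, Entry> _cache = new ConcurrentDictionary<string, Entry>();
            private readonly string _connectionString;
            private readonly TimeSpan _ttl;

            public CachedQueryRunner(string connectionString, TimeSpan ttl)
            {
                _connectionString = connectionString;
                _ttl = ttl;
            }

            public DataTable Query(string sql, params object[] parameterValues)
            {
                string key = sql + "|" + string.Join("|", parameterValues);

                Entry entry;
                if (_cache.TryGetValue(key, out entry) && entry.ExpiresAt > DateTime.UtcNow)
                    return entry.Data;                       // repeat request: served without touching SQL Server

                var table = new DataTable();
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    // assumes the SQL text uses @p0, @p1, ... placeholders
                    for (int i = 0; i < parameterValues.Length; i++)
                        command.Parameters.AddWithValue("@p" + i, parameterValues[i]);

                    connection.Open();
                    using (var adapter = new SqlDataAdapter(command))
                        adapter.Fill(table);
                }

                _cache[key] = new Entry { Data = table, ExpiresAt = DateTime.UtcNow.Add(_ttl) };
                return table;
            }
        }

    A real version would also need an invalidation story - the SQL dependency notifications mentioned above, or per-query TTLs - plus a memory cap and eviction, which is essentially what dedicated distributed caches already provide out of the box.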

    Read the article

  • Microsoft Desktop Player is a Valuable Tool for IT Pro’s

    - by Mysticgeek
    If you are an IT professional, the MS Desktop Player is a new education tool introduced by Microsoft. Today we take a look at what it has to offer, from webcasts, white papers, and training videos to more. Microsoft Desktop Player You can run the player from the website (shown here) or download the application for use on your local machine (link below). It allows you to easily access MS training and information in a central interface. To get the desktop version, download the .msi file from the site… And run through the installer…   When you first start out, enter whether you're an IT Pro or Developer and what your role is. Then you can select the resources you're looking for, such as Exchange Server, SharePoint, Windows 7, Security…etc. Here is an example of checking out a podcast on Office 2007 setup and configuration from TechNet Radio. Under Settings you can customize your search results and local resources. This helps you narrow down pertinent information for your needs. If you find something you really like, hover the pointer over the screen and you can add it to your library, share it, send feedback, and check for additional resources. If you don't need items in your library they can be easily deleted. Under the News tab you get previews of Microsoft news items; clicking one will open the full article in a separate browser. While you're watching a presentation you can show or hide the details related to it. Conclusion Microsoft Desktop Player is currently in Beta, but has a lot of cool features to offer for your learning needs. You can easily find podcasts, webcasts, and more without having to browse all over the place. In our experience we didn't notice any bugs, and what it offers so far works well. If you're a geek who's constantly browsing TechNet and other Microsoft learning sites, this helps keep everything consolidated in one app. Download Microsoft Desktop Player

    Read the article

  • Oracle Tutor: Are Documented Policies and Procedures Necessary?

    - by emily.chorba(at)oracle.com
    People refer to policies and procedures with a variety of expressions, including business process documentation, standard operating procedures (SOPs), department operating procedures (DOPs), work instructions, specifications, and so on. For our purpose here, policies and procedures mean a set of documents that describe an organization's policies (rules) for operation and the procedures (containing tasks performed by individuals) to fulfill the policies. When an organization documents policies and procedures properly, they can be the strategic link between an organization's vision and its daily operations. Policies and procedures are often necessary because of some external requirement, such as environmental compliance or other governmental regulations. One example of an external requirement would be the American Sarbanes-Oxley Act, requiring full openness in accounting practices. Here are a few other examples of business issues that necessitate writing policies and procedures:
    Operational needs -- policies and procedures ensure fundamental processes are performed in a consistent way that meets the organization's needs.
    Risk management -- policies and procedures are identified by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as a control activity needed to manage risk.
    Continuous improvement -- procedures can improve processes by building important internal communication practices.
    Compliance -- well-defined and documented processes (i.e. procedures, training materials), along with records that demonstrate process capability, can demonstrate an effective internal control system compliant with regulations and standards.
    In addition to helping with the above business issues, policies and procedures can support the basic needs of employees and management. Well-documented and easy-to-access policies and procedures:
    allow employees to understand their roles and responsibilities within predefined limits and to stay on the accepted path identified by the organization's management
    provide clarity to the reader when dealing with accountability issues or activities that are of critical importance
    allow management to guide operations without constant intervention
    allow managers to control events in advance and prevent employees from making costly mistakes
    Can you think of another way organizations can meet the above needs of management and their employees in place of documented policies and procedures? Probably not, but we would love your feedback on this question. And that, my friends, is why documented policies and procedures are very necessary. Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Emily Chorba, Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Invitation: Oracle EMEA Analytics & Data Integration Partner Forum, 12th November 2012, London (UK)

    - by rituchhibber
    INVITATION: ORACLE EMEA ANALYTICS & DATA INTEGRATION PARTNER FORUM, MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK)
    Dear partner, Come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialised partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops. Keynote: Andrew Sutherland, SVP Oracle Technology, on how Data Integration can bring great value to your customers by moving data to transform their business experiences, plus Oracle pan-EMEA Data Integration business development and opportunities for partners. Hot agenda items will include:
    The Fusion Middleware Stack: Engineered to work together
    A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
    In-Memory Analytics for Extreme Insight
    Latest Product Development roadmap for Data Integration and Analytics
    Venue: Oracle's London City (Moorgate) offices. During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Places are limited - register your seat today! To register for this event CLICK HERE. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.
    Event Schedule
    November 12th (Day 1): Main Plenary Session, full day, starting 10.30 am; Oracle-hosted dinner in the evening.
    November 13th onwards: Architecture Masterclass: IM Reference Architecture - Big Data and Little Data combined (1 day); BI-Apps Bootcamp (4 days); Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day); GoldenGate Workshop (1 day).
    For further information and detail download the Agenda (pdf) or contact Michael Hallett at [email protected]. We look forward to seeing you there. Best regards, Mike Hallett, Alliances and Channels Director, BI & EPM, Oracle EMEA, M.No: +44 7831 276 989, [email protected]; Duncan Harvey, Business Development Director for Data Integration, M.No: +420 608 283, [email protected]; Milomir Vojvodic, Business Development Manager for Data Integration, M.No: +420 608 283, [email protected]. Copyright © 2012, Oracle and/or its affiliates. All rights reserved.

    Read the article

  • Warm Reception By Partners at EMEA Manageability Forum

    - by Get_Specialized!
    For the EMEA partners who were able to attend the event in Istanbul, Turkey: thank you for your attendance and feedback at the event. As you can see, the weather kept most of us inside during the event, and at times there was even some snow. And while it may have been chilly outside, there was a warm reception from partners who traveled from all over EMEA to hear from other Oracle Specialized Partners and subject matter experts about the opportunities and benefits of Oracle Enterprise Manager and Exadata Specialization. Here you can see David Robo, Oracle Technology Director for Manageability, kicking off the event, followed later by Patrick Rood from Oracle's Indirect Manageability business. A special thank you to all the partner speakers, including Ron Tolido, VP and CTO of Application Services Continental Europe at Capgemini, who delivered a very innovative keynote in which many in attendance learned that Black Swans do exist. During the breaks, interactivity among partners continued, and it was great to see such innovative partners who had listed their achieved specializations on their business cards. Here we can see Oracle Enterprise Manager customer, Turkish Oracle User Group board member and blogger Gokhan Atil sharing his product experiences with others attending. Additionally, Christian Trieb of Paragon Data shared with other partners what the German Oracle User Group (DOAG) was doing around manageability, along with an invitation to submit papers for their next event. Here we can see, at one of the breaks, one of the event organizers, Javier Puerta (left), Oracle Director of Partner Programs, joined by Sebastiaan Vingerhoed (middle), Oracle EE & CIS Manager for Manageability and speaker on Managing the Application Lifecycle, and Julian Dontcheff (right), Global Head of Database Management at Accenture. Below is Julian Dontcheff delivering his partner presentation on Exadata and Lifecycle Management. Just after his plane landed and a one-hour Turkish taxi ride to the event location, Julian still took the time to sit down with me and provide some extra insights on his experiences of managing enterprise infrastructure with Oracle Enterprise Manager. Below, Mark McGill, Oracle Principal Product Manager from the Oracle Enterprise Manager product management team, presents to partners on how you can perform chargeback and metering with Oracle Enterprise Manager 12c Cloud Control. Overall, it was a great event, and an extra thank you to those OPN Specialized Partners who presented, to the partners who attended, and to the Oracle team members who organized the event and presented.

    Read the article

  • How can I save my university's Computer Science & Engineering department? [closed]

    - by Blake
    I'm currently pursuing a B.S. in Computer Engineering at the University of Florida, and we're having a bit of a problem right now... The state recently passed a budget plan that cuts funding for higher education in Florida. The dean of UF's College of Engineering decided that the best way for us to absorb the blow is by executing the following plan: All of the Computer Engineering Degree programs, BS, MS and PhD, would be moved from the Computer & Information Science and Engineering Dept. to the Electrical and Computer Engineering Dept. along with most of the advising staff. Roughly half of the faculty would be offered the opportunity to move to Electrical/Computer Eng., Biomedical Eng., or Industrial/Systems Eng. Staff positions in CISE which are currently supporting research and graduate programs would be eliminated. The activities currently covered by TAs would be reassigned to faculty and the TA budget for CISE would be eliminated. Any faculty member who wishes to stay in CISE may do so, but with a revised assignment focused on teaching and advising. In short: our department (at least as we know it) is being decimated. Computer & Information Sciences & Engineering (one of 9 departments in the College of Engineering) is taking more than 50% of the cuts. If you're interested in reading the full proposal, you can access it here. A vast, VAST majority of the students and faculty in the department are vehemently opposed to this plan, however the dean is already taking measures to implement it. This is the only proposal on the table right now, and she has not entertained our requests for alternatives. She sees it as an obvious (albeit drastic) solution to our budget problem, citing that many other universities have combined Computer and Electrical Engineering departments. I'll bet those universities didn't have to eliminate an established department to get there, though. The budget goes into effect July 1, 2012 (this is non-negotiable), and the dean's proposal is currently set to be finalized some time next week. We don't have much time! My question to everyone here is this: Are we overreacting to this plan, or are we justified? And could you explain why or why not? It's obvious that CISE students will resist any cuts to our department, but I'm curious to see what other people in the field have to say. Any feedback is greatly appreciated. I will select the answer that saves our department. Just kidding, I'll pick the one that best explains why this is a good or bad decision for the dean to make. Please note that anything you say can and will be used to further our cause (and we might track you down if you provide a compelling argument against us).

    Read the article

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well known, I thought it would be good to repost, which I've done with permission from Earl Sullivan. This is an FYI for those who have UPK 3.6.1 and have applied Enablement Pack 1: there is a manual database update that needs to be run. Here is the information:
    To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL Server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.
    To update a database located on an Oracle Database server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Start SQL*Plus and log in with the system account.
    3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters: schema_owner (there is a limit of 20 characters on the schema owner name; you can find this information in the web.config file located in Repository.WS in the folder where the server is installed) and password (the existing schema owner password). Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
    4. Run the AlterSchemaObjects.sql script.
    To update a database located on a Microsoft SQL Server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Log in to the database using the database administrator account.
    3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed.
    4. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
    5. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script. Note: The database name is the initial catalog in the connection string in the web.config file.
    Editor's note: The database update fixes a problem with permissions where a user's permissions would be incorrectly updated when a group that the user had been removed from had its permissions changed.

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus - register on EventBrite if you want to come along.
    Synopsis: Looked at the benefits and drawbacks of app-centric isolated caches compared to an enterprise-wide shared cache running on dedicated nodes; suggested issues and risks around caching, including staleness of data, resource usage, performance and testing; walked through a generic service cache implemented as a WCF behaviour - suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; and listed common options for cache providers and their offerings.
    Discussion:
    Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), and reduced resource impact - CPU, memory, SQL and cost (e.g. caching results of paid-for services).
    Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on the cache provider, they can offer easy scalability and global replication so cache clients always use local nodes. The restriction of AppFabric Caching to Windows Server 2008 was not viewed as a concern.
    Security. Limited security model in most cache providers. Options for securing cache content were suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, cache servers can be made accessible only to cache clients via IPsec.
    Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade.
    Providers. Positive feedback for AppFabric Caching - speed, configurability and richness of the distributed model make it a good enterprise choice. The .NET port of memcached is well regarded for performance, but lack of replication makes it less suitable for these shared scenarios. The replicated fork - repcached - is untried and less active than memcached. NCache is also well thought of, but the Express version is too limited for enterprise scenarios, and the commercial versions look costly compared to AppFabric.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Basics of a Query Hint

    - by pinaldave
    This blog post is inspired by SQL Architecture Basics Joes 2 Pros: Core Architecture concepts – SQL Exam Prep Series 70-433 – Volume 3. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Basics of a Query Hint – A Primer. In that article we discussed the basic terminology of query hints. The article further covers the following important concepts of query hints:
    Expecting a Seek and getting a Scan
    Creating an index for improved optimization
    Implementing the query hint
    The above three are the most important concepts related to query hints and SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.
    Quiz
    Question 1: You have the following query: DECLARE @UlaChoice TinyInt SET @Type = 1 SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice You have a nonclustered index named IX_Legal_Ula on the UlaChoice field. The primary key is on the ID field and is called PK_Legal_ID. 99% of the time the value of @UlaChoice is set to 'YP101'. Which query will achieve the best optimization for this query?
    1. SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(IX_Legal_Ula))
    2. SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(PK_Legal_ID))
    3. SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice OPTION (Optimize FOR(@UlaChoice = 'YP101'))
    Question 2: You have the following query: SELECT * FROM CurrentProducts WHERE ShortName = 'Yoga Trip' You have a nonclustered index on the ShortName field and the query runs an efficient index seek. You change your query to use a variable for ShortName and now you are using a slow index scan. What query hint can you use to get the same execution time as before?
    1. WITH LOCK
    2. FAST
    3. OPTIMIZE FOR
    4. MAXDOP
    5. READONLY
    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance.
    Solution
    Question 1: 3
    Question 2: 4
    Now let us check the answers; compare your answers to the ones above. I am very confident you got them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers to the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Handling Coding Standards at Work (I'm not the boss)

    - by Josh Johnson
    I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto format and they use a different character to indent than I do. I don't care which we use I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one so long as they are all the same. I've brought up the issue of standards to my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to, "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change and this solution seems overly complicated and prone to maintenance issues down the road (also, this addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grass roots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular, should I just let it go? Thank you all for your time. Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group. Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter. It manifests itself with many examples: naming conventions, huge functions that should be broken up, should something go in a util or service, should something be a constant or injected, should we all use different versions of a dependency or the same, should an interface be used for this case, how should unit tests be set up, what should be unit tested, (Java specific) should we use annotations or external config. I could go on.

    Read the article

  • Kickstarter "last minute cold feet"

    - by mm24
    Today I scheduled the publication of a video on Kickstarter requesting approximately $5,000 in order to complete the iPhone shooter game I started 1 year ago after quitting my job. I have invested more than $20,000 in the game so far (for artwork, music, legal and accountant expenses) and I am now getting cold feet about my decision to publish the video. The game is "nearly finished" - in other words, the game mechanics are working but I still have some bugs to fix. Once I have finished this (I hope it will take me 1 or 2 weeks) I plan to start working on the actual level balancing (e.g. deciding the order of appearance of enemies for each level and balancing the number of hit points and strength of bullets that the enemies have).
    Reasons for not publishing the video are:
    Fear that the concept can be copied easily: the game is a shooter game set in a different environment (it's a pretty cool one, believe me :)) and I am worried that someone might copy the idea (I know, it's the usual "I am worried" story...). A shooter game is one of the easiest games to implement, and hence there will be hundreds of game developers able to copy it by just adapting their existing code and changing the graphics (though that is not as straightforward). It took me one year to develop this because I was inexperienced, plus there are approximately 6/7 months of work from the illustrator and there are 8 unique music tracks composed.
    The soundtrack of the video is the soundtrack of the game, which is not yet published and has not been deposited with a music society. I did create legally valid timestamps for the tracks and I am considering uploading the album to iTunes before publishing the video so I can have a certain publication date. But overall I am a bit scared and worried because I have never done this before, and even the simple act of publishing an album requires me to read a long contract from the "aggregator company", which, even though I do have contracts with the musicians, does worry me as I am not a U.S. resident and I am not familiar with the U.S. law system.
    Reasons for publishing the video are:
    I have almost run out of money (but this is not a real reason, as I should have enough for one more month of development time).
    I kind of need extra money as, even if I do have money for 1 month of development, I do not have money for marketing and for other expenses (e.g. accountant).
    It will create a fan base.
    I could get some useful feedback from a wider range of beta testers.
    It might create some pre-release buzz in case some blogger or game magazine likes the concept.
    Has anyone had similar experiences? Is there a real risk that someone will copy the concept and implement it in a couple of months? Will the Kickstarter campaign be good pre-release exposure for the game? Any references to similar projects/situations? Is it realistic that someone like ROVIO will copy the idea straight away?

    Read the article

  • Excel tables creation upon MySQL data import (new feature in MySQL for Excel 1.2.x)

    - by Javier Treviño
    In this blog post we are going to talk about one of the features included since MySQL for Excel 1.2.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone. Remember how easy it is to dump data from a MySQL table, view or stored procedure to an Excel worksheet? (If you don't, you can check out this other post: How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel.) In version 1.2.0 we introduced some advanced options for the Import MySQL Data operation regarding Excel tables. The Advanced Options dialog shown above is accessible from any Import Data dialog. When the Create an Excel table for the imported MySQL table data option is checked (which it is by default), MySQL for Excel will create an Excel table (also known in Excel jargon as a ListObject) from the Excel range containing the imported MySQL data. This "little feature" enables immediate use of the Excel table in data analysis, such as including it for summarization in a PivotTable, adding a summarization row at the end of the table's data, or sorting and filtering the table's data by clicking the drop-down button next to each column's header, among other actions. The Excel tables that are created automatically from imported MySQL data will have a name like [UserPrefix].<SchemaName>.<DbObjectName> for tables and views, and <Prefix>.<SchemaName>.<ProcedureName>.<ResultSetName> for stored procedures. Notice the first piece of the name is an optional [UserPrefix]; the prefix is only used if the Prefix Excel tables with the following text option is checked. The suggested prefix is "MySQL" but it can be changed to whatever text is suitable for you. Excel tables must have a table style so they are easily identified. There are a lot of predefined Excel table styles; by default the MySqlDefault style is applied, which is the style you have seen applied to imported data for Edit Sessions, and which adds simple and elegant formatting to the table. If you wish to change it to any of the predefined Excel table styles, you can do so through the drop-down list in the Use style for the new Excel table option. Excel tables are the basic building blocks for data analysis or self-service Business Intelligence using other, more advanced Excel tools like Power Pivot, Power View or Power Map. This feature makes it possible to use imported MySQL data in those more advanced ways. We hope you give this and the other new features in the 1.2.x version family a try! Remember that your feedback is very important for us, so drop us a message and follow us: MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/ MySQL for Excel forum: http://forums.mysql.com/list.php?172 Facebook: http://www.facebook.com/mysql YouTube channel: https://www.youtube.com/user/MySQLChannel Cheers!

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) have been released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. From a functionality perspective, NDDDSample matches DDDSample 1.1.0, which is based on Java and on a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. Because NDDDSample is based on .NET technologies, the two implementations cannot be matched directly. However, concepts, practices, values and patterns - especially DDD - are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey because now, as a .NET developer, I better understand the differences, positive and negative, between these two platforms. Even though those differences exist, they can be overcome; in many cases it was not so hard to match a Java lib/framework with a .NET one during the implementation. Here is the technology stack:
    1. .NET 3.5 - framework
    2. VS.NET 2008 - IDE
    3. ASP.NET MVC 2.0 - for the administration and tracking UI
    4. WCF - communication mechanism
    5. NHibernate - ORM
    6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    7. SQLite - database
    8. Windsor - inversion of control container
    9. Windsor WCF facility - for better integration with NHibernate
    10. MvcContrib - and in particular its Castle WindsorControllerFactory, in order to enable IoC for controllers
    11. WPF - for the incident logging application
    12. Moq - mocking lib used for unit tests
    13. NUnit - unit testing framework
    14. Log4net - logging framework
    15. Cloud support based on the Azure SDK
    These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin
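
    As a rough illustration of how two of the pieces above fit together (this is not code from the NDDDSample project - the repository names are placeholders), registering an NHibernate-backed repository behind a domain interface with Windsor might look like this:

        // Windsor wiring sketch: the domain layer depends only on ICargoRepository,
        // while the container supplies the NHibernate-backed implementation.
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public interface ICargoRepository { /* domain-facing contract */ }
        public class NHibernateCargoRepository : ICargoRepository { /* NHibernate-backed implementation */ }

        public static class ContainerBootstrapper
        {
            public static IWindsorContainer Build()
            {
                var container = new WindsorContainer();
                container.Register(
                    Component.For<ICargoRepository>()
                             .ImplementedBy<NHibernateCargoRepository>()
                             .LifeStyle.Transient);
                return container;
            }
        }

    The same container is what MvcContrib's WindsorControllerFactory draws on to inject such services into the MVC controllers.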

    Read the article

  • Refreshing imported MySQL data with MySQL for Excel

    - by Javier Rivera
    Welcome to another blog post from the MySQL for Excel Team. Today we're going to talk about a new feature included since MySQL for Excel 1.3.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone. As some users suggested in our forums, we should maintain the link between tables and Excel not only when editing data through the Edit MySQL Data option, but also when importing data via Import MySQL Data. Before 1.3.0 this process only provided you with an offline copy of the table's data in Excel, and you had no way to refresh that information from the DB later on. Now, with this new feature, we'll show you how easy it is to work with the latest available information at all times. The feature is transparent to you and requires no additional steps, as long as you have the Create an Excel table for the imported MySQL table data option enabled. (To ensure this option is checked, click Advanced Options... after the Import Data dialog is displayed.) The current blog post assumes you already know how to import data into Excel; you can always take a look at our previous post How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel if you need further reference on that topic. After importing data from a MySQL table into Excel, you can refresh the data in 3 ways:
    1. Simply right-click over the range of the imported data to show the pop-up menu, then click the Refresh button to obtain the latest copy of the data in the table.
    2. Click the Refresh button on the Data ribbon.
    3. Click the Refresh All button in the Data ribbon (beware: this will refresh all Excel tables in the workbook).
    Please take note of a couple of details here. The first one is about the size of the table: if new columns have been added to the table by the time you refresh it, and you originally imported all columns, the table will grow to the right. The same applies to rows; if the table has new rows and you did not limit the results, the table will grow to the bottom of the sheet in Excel. The second detail you should take into account is that this operation will overwrite any changes made to the cells after the table was originally imported or previously refreshed. Now, with this new feature, imported data remains linked to the data source and is available to be updated at all times. It empowers the user to always be able to work with the latest version of the imported MySQL data. We hope you like this new feature and give it a try! Remember that your feedback is very important for us, so drop us a message with your comments and suggestions for this or other features and follow us at our social media channels: MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/ MySQL for Excel forum: http://forums.mysql.com/list.php?172 Facebook: http://www.facebook.com/mysql YouTube channel: https://www.youtube.com/user/MySQLChannel Thanks!

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as share any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g. objects a, b, c and d, where object b has a relationship to object a, object c has relationships to objects b and a, and object d has relationships to objects c, b and a. Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:
    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+
    where a, b, c and d would translate to their respective IDs in their respective tables. So, to easily report all of the a's which exist on object d, one could query
    SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a and type_id=6;
    rather than having a query with a join to each level up the chain.
    The Problem: This table grows exponentially and, in a given year, could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would take exponentially longer (though possibly not noticeably to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but it allows the greatest flexibility and the fastest read operations.
    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this application is done through an ORM framework.
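
    To make the "keep it in sync" cost concrete, here is a hypothetical sketch of the maintenance logic the design implies: when a new object is attached to a parent, the syndicate table needs one direct row plus a copy of every ancestor row the parent already has, so reporting queries can reach any level in a single join. (The class and field names are illustrative, and how type_id values are assigned is left open, since the design above doesn't pin that down.)

        using System.Collections.Generic;

        public class SyndicateRow
        {
            public int ChildId;
            public int ParentId;
            public int TypeId;
        }

        public static class SyndicateBuilder
        {
            // parentAncestors: all existing rows where child_id == parentId (the parent's own chain)
            public static List<SyndicateRow> RowsForNewChild(
                int childId, int parentId, int directTypeId, IEnumerable<SyndicateRow> parentAncestors)
            {
                var rows = new List<SyndicateRow>
                {
                    new SyndicateRow { ChildId = childId, ParentId = parentId, TypeId = directTypeId }
                };

                foreach (var ancestor in parentAncestors)
                {
                    rows.Add(new SyndicateRow
                    {
                        ChildId = childId,
                        ParentId = ancestor.ParentId,
                        TypeId = ancestor.TypeId   // or whatever numbering scheme the design settles on
                    });
                }

                return rows;   // one INSERT per row; this is the write amplification the question worries about
            }
        }

    Whether this runs in the ORM layer or in a trigger, the number of extra rows per insert equals the depth of the new node - that is the trade-off between write cost and the single-join reads described above.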

    Read the article

  • Silverlight Cream for April 08, 2010 -- #834

    - by Dave Campbell
    In this Issue: Michael Washington, Phil Middlemiss, Yochay Kiriaty, Giorgetti Alessandro, Mike Snow, John Papa, SilverLaw, smartyP, and Pete Brown. Shoutouts: Steve Wortham sent me a link to his RegEx tool that is written in Silverlight... definitely worth a look: Introducing Code Hinting for Regular Expressions Joshua Blake posted his MIX10 materials: MIX10 NUI session sample code From SilverlightCream.com: Silverlight MVVM: An (Overly) Simplified Explanation Michael Washington has a tutorial up for getting your arms (and head) around MVVM and Silverlight, and Blend too. A Chrome and Glass Theme - Part 3 Phil Middlemiss has part 3 up of his tutorial series on building an awesome theme for Silverlight... he's styling the textbox and checkbox this time around, and improving the button too Automatic Rotation Support or Automatic Multi-Orientation Layout Support for Windows Phone Yochay Kiriaty is giving up some WP7 goodness with his post on Multi-Orientation Layout Support ... yeah I had to say it twice myself :) good links and all the code in addition to the good blog post Silverlight Navigation Framework: resolve the pages using an IoC container Giorgetti Alessandro has some pretty cool code up as a proof of concept of using an IoC container with the Navigation Framework of Silverlight 4. Silverlight Tip of the Day No. 109 – Attach to Process Debugging Mike Snow is back doing Tips of the Day... and number 109 is showing how to attach the debugger to a running Silverlight app. Silverlight TV 20: Community Driven Development with WCF RIA Services In his latest Silverlight TV episode, John Papa talks with Jeff Handley about RIA Services, and how feedback from the community helped shape the product. ChildWindowMouseScrollResizeBehavior - Silverlight 3 SilverLaw has a new Behavior up at the Expression Gallery that gives you resizing on a ChildWindow using the Mouse Wheel. Creating a Windows Phone 7 Metro Style Pivot Application [Part 3] smartyP has the 3rd and final episode for his WP7 Pivot up, and this one includes not only the source but a video tutorial. Layout Rounding Pete Brown talks about Layout Rounding and it has nothing to do with rounding corners... it has to do with rounding off where your objects get placed pixel-wise ... I've blogged about this seemingly-anti-aliasing more than once... Pete has the real answer Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into Scrum and TDD, and I think I have some confusion which I'd like to get your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD, I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true, since the acceptance tests need to be formal, so they can be used as tests, but also human-readable, so that the product manager can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now.
    Update: I think my initial intentions were unclear, so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user (or the user's representative, such as the product manager) surfaces a need, which becomes a short 1-2 line description in the known format, and that is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code, they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom, and in which format are the requirements compiled? What I had in mind was that the product manager and QA define the requirements via acceptance tests (I'm thinking of automated ones using FitNesse or the like, but that's not the core issue), which help to serve two purposes at the same time: they define "Done" properly, and they give the developer something to derive tests from. I wasn't sure when these would be written (before the sprint in which the story is picked, which might be wasteful since additional information will arrive or the story might not be picked; or during the iteration, in which case the developer might get stuck waiting for them...).
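
    To make the hand-off concrete, here is a minimal sketch of one acceptance criterion turned into an executable test. The story, the ShoppingCart class and the use of NUnit are all illustrative assumptions, not anything prescribed by Scrum or by the tools mentioned above:

        using NUnit.Framework;

        // Hypothetical story: "As a shopper, I get a 5% discount on orders of 10 or more items."
        public class ShoppingCart
        {
            private decimal _subtotal;
            private int _itemCount;

            public void Add(decimal itemPrice, int quantity)
            {
                _subtotal += itemPrice * quantity;
                _itemCount += quantity;
            }

            // The rule under test: 5% off once the order reaches 10 items.
            public decimal Total
            {
                get { return _itemCount >= 10 ? _subtotal * 0.95m : _subtotal; }
            }
        }

        [TestFixture]
        public class DiscountAcceptanceTests
        {
            [Test]
            public void Order_of_ten_items_gets_a_five_percent_discount()
            {
                var cart = new ShoppingCart();
                cart.Add(itemPrice: 10m, quantity: 10);   // criterion agreed with the product manager / QA

                Assert.AreEqual(95m, cart.Total);
            }
        }

    In practice the test would be written first against an empty ShoppingCart, fail, and then drive the implementation - which is exactly where the acceptance test and the developer's TDD cycle meet.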

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use as a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what StubHub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database, and read more on their story here. Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). It also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle: from data collection, to model building, model assessment, and then outcome assessment and feedback into the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff that need to handle the potentially fraudulent transactions. As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models become a reality. Meanwhile, data scientists are also able to explore leading-edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • Register now! Exadata Partner Community Forum in Lisbon, Apr. 13-14

    - by javier.puerta(at)oracle.com
    INVITATION: ORACLE EMEA EXADATA PARTNER COMMUNITY FORUM, 13-14 APRIL 2011, SHERATON HOTEL, LISBON, PORTUGAL - THE BEST PLACE TO BE IN 2011 FOR ORACLE EXADATA PARTNERS! Venue & hotel accommodation: Sheraton Lisboa Hotel & Spa, Rua Latino Coelho 1, Lisbon, Portugal. Dear Exadata partner, I am delighted to invite you to the first Exadata Partner Community Forum for EMEA partners, which will take place in Lisbon, Portugal on 13-14 April, 2011. This event will provide you with a great opportunity to listen to our Oracle executives and our specialists' keynotes on future sales and product strategy, and also to share sales and implementation experiences with other partners as a key part of the agenda. Do not miss this tremendous learning experience: a complete event covering everything from the initial phases of the sales cycle to project implementation, including the following highlights:
    Update on Oracle's strategy and road map for Exadata
    Market drivers and business opportunities
    Selling Exadata: discovery and qualification process; accessing Oracle and partners' proof-of-concept infrastructure
    Case studies from partners who have successfully sold and implemented projects and developed a service business around Exadata
    Exadata OPN enablement and specialization
    And there's more... On the evening of April 13th you will be treated to a pleasant dinner at the Sheraton Hotel, where you will also have another networking opportunity in a relaxing atmosphere, with a beautiful panoramic view of the city of Lisbon. Please view the agenda for more details. Registration: The EMEA Exadata Community Forum is not to be missed, so to reserve your place please register here before March 1st. There is no registration fee for Oracle partners. Accommodation: The Sheraton Hotel has created a customized hotel registration portal for this event. Please click here for immediate hotel booking and rates; details are also provided on the Registration Event portal. For further information or assistance with venue logistics, please contact Angela Cadran. For other questions, please contact Javier Puerta. Javier Puerta, Core Technology Partner Programs, Oracle EMEA. Copyright © 2011, Oracle and/or its affiliates. All rights reserved.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Error Actions

    - by pinaldave
    This blog post is inspired from SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 4. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is follow up blog post of my earlier blog post on the same subject - SQL SERVER – Introduction to SQL Error Actions – A Primer. In the article we discussed various basics terminology of the error handling. The article further covers following important concepts of error handling. Introduction to SQL Error Actions Statement Termination Scope Abortion Batch Termination Above three are the most important concepts related to error handling and SQL Server.  There are many more things one has to learn but without beginners fundamentals one can’t learn the advanced concepts. Let us have small quiz and check how many of you get the fundamentals right. Quiz 1.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to ON? You will get Statement Termination. You will get Scope Abortion. You will get Batch Abortion. You will get Connection Termination. SQL Server will pick the error action. 2.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to OFF? You will get Statement Termination You will get Scope Abortion You will get Batch Abortion You will get Connection Termination SQL Server will pick the error action Now make sure that you write down all the answers on the piece of paper. Watch following video and read earlier article over here. If you want to change the answer you still have chance. Solution 1) 3 2) 5 Now compare let us check the answers and compare your answers to following answers. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Victor Grazi, Java Champion!

    - by Tori Wieldt
    Congratulations to Victor Grazi, who has been made a Java Champion! He was nominated by his peers and selected as a Java Champion for his experience as a developer and his work in the Java and open source communities. Grazi is a Java evangelist and serves on the Executive Committee of the Java Community Process, representing Credit Suisse - the first non-technology vendor on the JCP. He also arranges the NY Java SIG meetings at Credit Suisse's New York campus each month, and he says it has been a valuable networking opportunity. He is also the spec lead for JSR 354, the Java Money and Currency API. Grazi has been building real-time financial systems in Java since JDK version 1.02! In 1996, when the internet was just starting to happen, Grazi started a dot-com called Supermarkets to Go that provided an on-line shopping presence to supermarkets and grocers. Grazi wrote most of the code, which was a great opportunity for him to learn Java and UI development, as well as database management. Next, he went to work at Bank of NY building a trading system. He studied for Java certification, and he noted that getting his certification was a game changer because it helped him start to learn the nuances of the Java language. He has held other development positions: "You may have noticed that you don't get as much junk mail from Citibank as you used to - that is thanks to one of my projects!" he told us. Grazi joined Credit Suisse in 2005 and is currently Vice President on the central architecture team. Grazi is proud of his open source project, Java Concurrent Animated, a series of animations that visualize the functionality of the components in the java.util.concurrent library. "It has afforded me the opportunity to speak around the globe," he says, and because of it he has discovered that he really enjoys doing public presentations. He is a fine addition to the Java Champions program. The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Nominees are named and selected through a peer review process. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. This interchange may be in the form of technical discussions and/or community-building activities with Oracle's Java Development and Developer Program teams.

    Read the article

  • Newbie worried about CASE tool.

    - by Jason Evans
    Hi there. I'm looking for some guidance on CASE tools and whether my concerns are valid. Recently I was in a meeting between my employer and an external software company which has a CASE tool currently in beta. They demonstrated this tool to us, showing how you build a UML model in Enterprise Architect (or something like it) and then, through their tool, that UML model is transformed into a Visual Studio project, with C# files, stored procedures for SQL Server, code for the data layer, WCF stuff, logging code and all sorts. Now, admittedly, I don't see the point in this - I'm not convinced it will save that much time (plus it feels like overkill). The tool authors said that a trial of the tool at another company had saved a team there 5 weeks of development time (from 6 weeks down to about 1 week). I find the accuracy of that estimate hard to believe. My main concern is whether using this tool is going to slow down my productivity. For example - say I have a UML model from which I built a VS solution. Now, I want to rename a class method to something else; will this mean having to update the UML model first and then rebuilding the code? Is this how CASE tools normally work? Something I will need to check with the authors is the structure of the generated VS solution. I like the Domain Driven Design way of project structure - Infrastructure, Services, Model, etc. I doubt very much this tool will do that. Also, I've been playing around with Entity Framework Code First and think it's a great way to build the data model. I have nice repositories, unit of work classes and other design patterns that work well with EF. I have data annotations and stuff like that working great. By not having EF (the CASE tool uses its own data layer code) I'm concerned that this tool's data layer code might not be as nice to integrate into the UoW pattern, repositories, etc. This I will need to verify when I get a closer look at the generated code. What are other people's experiences with CASE tools? Am I being paranoid about nothing? Am I being unfair - are my negativities unfounded? EDIT: I like to use TDD/BDD for building my code, and using a CASE tool looks like it will make this difficult. Again, any feedback on this would be great. Cheers. Jas.

    Read the article

  • Announcing MySQL Enterprise Backup 3.7.1

    - by Hema Sridharan
    The MySQL Enterprise Backup (MEB) Team is pleased to announce the release of MEB 3.7.1, a maintenance release that includes bug fixes and enhancements to some of the existing features. The most important feature introduced in this release is Automatic Incremental Backup. New argument syntax for the --incremental-base option makes it simpler to perform automatic incremental backups. When the options --incremental and --incremental-base=history:last_backup are combined, the mysqlbackup command uses the metadata in the mysql.backup_history table to determine the LSN to use as the lower limit of the incremental backup. You no longer need to keep track of the actual LSN (as in the option --start-lsn=LSN) or even the location of the previous backup (as in the option --incremental-base=dir:directory_path). This release also includes various bug fixes related to some of the options used in MEB. A few of the most important are listed below:
    1. The option --force now allows overwriting InnoDB data and log files in combination with the apply-log and apply-incremental-backup options, and replacing the image file in combination with the backup-to-image and backup-dir-to-image options.
    2. Resolved a bug that prevented MEB from interfacing with third-party storage managers to execute backup and restore jobs in combination with the SBT interface and the associated --sbt* options for mysqlbackup.
    3. When MEB is run with the copy-back option, it now displays warnings as existing files are overwritten.
    For more information about other bug fixes, please refer to the change log at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/meb-news.html. The complete MEB documentation is located at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/index.html. You will find the binaries for the new release in My Oracle Support, https://support.oracle.com. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. If you haven't looked at MEB 3.7.1 recently, please do so now and let us know how MEB works for you. Send your feedback to [email protected].
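    To illustrate the automatic incremental backup syntax described above, here is a rough command-line sketch. Only --incremental and --incremental-base=history:last_backup come from the announcement; the connection options, the backup directory option and path are illustrative placeholders, so consult the MEB 3.7.1 manual for the exact invocation.

    # Hedged sketch: options other than --incremental and
    # --incremental-base=history:last_backup are illustrative assumptions.
    mysqlbackup --user=backup_admin --password \
        --incremental --incremental-base=history:last_backup \
        --incremental-backup-dir=/backups/incr \
        backup

    Because the lower-limit LSN is read from the mysql.backup_history table, a command along these lines can be scheduled repeatedly without tracking LSNs or previous backup locations by hand.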

    Read the article

  • Bash arrays and case statements - review my script

    - by Felipe Alvarez
    #!/bin/bash
    # Change the environment in which you are currently working.
    # Actually, it calls the relevant 'lettus.sh' script
    if [ "${BASH_SOURCE[0]}" == "$0" ]; then
        echo "Try running this as \". chenv $1\""
        exit 0
    fi

    usage(){
        echo "Usage: . ${PROG}       -- Shows a list of user-selectable environments."
        echo "       . ${PROG} [env] -- Select environment."
        echo "       . ${PROG} -h    -- Shows this usage screen."
        return
    }

    showEnv(){
        # check if index0 exists, assume we have at least the first (zeroth) element
        #if [ -z "${envList}" ]; then
        if [ -z "${envList[0]}" ]; then
            echo "array \$envList is empty! " >&2
            return 1
        fi
        # Show all elements in array (0 -> n-1)
        for i in $(seq 0 $((${#envList[@]} - 1))); do
            echo ${envList[$i]}
        done
        return
    }

    setEnv(){
        if [ -z "$1" ]; then
            usage; return
        fi
        case $1 in
            cold)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_cold.sh;;
            coles)   FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_coles.sh;;
            fc)      FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fc.sh;;
            fcrm)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fcrm.sh;;
            stable)  FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_stable.sh;;
            tip)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_tip.sh;;
            uat)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_uat.sh;;
            wellmdc) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_wellmdc.sh;;
            *)       usage; return;;
        esac
        if $IS_SOURCED; then
            echo "Environment \"$1\" selected."
            echo "Now sourcing file \"$FILE_TO_SOURCE\"..."
            . ${FILE_TO_SOURCE}
            return
        else
            return 1
        fi
    }

    main(){
        if [ -z "$1" ]; then
            showEnv; return
        fi
        case $1 in
            -h) usage;;
            *)  setEnv $1;;
        esac
        return
    }

    PROG="chenv"
    # create array of user-selectable environments
    envList=( cold coles fc fcrm stable tip uat wellmdc )
    main "$@"
    return

    If I could, I'd like to get some feedback on a better way to accomplish any of the following (one possible direction is sketched below):
    - run through the case statement
    - make the script trivially simple to maintain/upgrade/update
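    One direction that might address both points is to derive the file path from the environment name instead of enumerating it in a case statement, so that envList stays the single place to edit when an environment is added or removed. This is a sketch only, not a drop-in replacement, and it assumes every environment script keeps following the lettus_<env>.sh naming convention under /u2/tip/conf/ctrl:

    # Sketch under the assumption that lettus_<env>.sh exists for each envList entry.
    envList=( cold coles fc fcrm stable tip uat wellmdc )

    setEnv(){
        local env=$1
        local file="/u2/tip/conf/ctrl/lettus_${env}.sh"
        # membership test over the array replaces the case statement
        for e in "${envList[@]}"; do
            if [ "$e" = "$env" ] && [ -r "$file" ]; then
                echo "Environment \"$env\" selected."
                echo "Now sourcing file \"$file\"..."
                . "$file"
                return
            fi
        done
        echo "Unknown or unreadable environment: \"$env\"" >&2
        return 1
    }

    With this shape, adding a new environment means appending one word to envList (and dropping the matching lettus_<env>.sh into the conf directory) rather than editing both the array and the case block.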

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >