Search Results

Search found 3253 results on 131 pages for 'progress bars'.

Page 74/131

  • Doesn't boot after installation

    - by jchysk
    Downloaded Ubuntu 12.04.1-alternate-amd64 and installed it to a USB stick. The integrity check fails on ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default, but that seems to be a known bug where the file isn't included in the alternate 64-bit ISO and shouldn't affect installation, so I ignore it and proceed. For partitioning on 2 SSD drives: a 300MB and a 63GB partition on both, RAID1 the 300MB and the 63GB partitions, set the 300MB to EXT4 on /boot, encrypt the rest as MD1 and set it for LVM, then create two volumes from MD1: 4GB swap and 59GB for /. I go through the installation and get to the point where it says everything is ready and to take the media out so it can boot from the drives. On startup I receive the error "Error: No video mode activated." I've read that this can be solved by running "cp /usr/share/grub/*.pf2 /boot/grub" and then updating grub, but I can't get to a place where I can actually run this command. In rescue mode I can get to a shell from the installer with /boot mounted at /target. From there I can run "cp /cdrom/boot/grub/font.pf2 /target/grub/", but I can't figure out a way to update grub after that, or what to change when manually editing the grub.cfg file. If I try other devices to mount the root filesystem I get the error "An error occurred while mounting the device you entered for your root file system". It just sits on the video mode error and doesn't progress further. Googling around, it seems people see the error briefly before the system continues booting rather than getting stuck on it the way I am, which leads me to believe that error may be unrelated to Ubuntu not booting. Any ideas as to what I should try next, or what needs to be done to install Ubuntu and get it to boot, would be helpful.

    Read the article

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by [email protected]
    By Brian Dayton on April 12, 2010 11:15 AM Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of a PeopleSoft consulting shop in San Ramon, CA called Grey Sparling. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting. His commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through was also interesting: · "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware." · "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant." · "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have." I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting. Especially the advantages of aligning development of applications and infrastructure development under one roof. Here's a link to the whole blog entry.

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, onto your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have state – more specifically, 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect). Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether that means managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far further behind in the maturity of their database change management than they are in application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The list below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.
    Level D1 – Baseline: Work directly on live databases. Sometimes work directly in production. Generate manual scripts for releases, sometimes using a product like SQL Compare or similar to do this. Any tests that we might have are run manually.
    Level D2 – Beginner: Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system. An attempt is made to keep production in sync with development environments. There is some documentation and planning of manual deployments. Some basic automated DB testing is in place.
    Level D3 – Intermediate: The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT. Database environments are managed. The production environment schema is reproducible from the source control system. There are some automated tests. Have looked at using migration scripts for difficult database refactoring cases.
    Level D4 – Advanced: Using continuous integration for database changes. Build, testing and deployment of DB changes are carried out through a proper database release process. Fully automated tests. The production system is monitored for fast feedback to developers.
    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.

    Read the article

  • How do you keep cool when production system goes down?

    - by Mag20
    This has happened to most of us... You come to work one day. Everything seems normal: the sun is shining, birds are chirping, but you notice a couple of weird things on your way to work, like déjà vu with the cat in The Matrix. You get into the office; there are a lot of phones ringing, but it could just be a new sales promotion. You settle in, when you notice a dark cloud hovering over you. It takes you a couple of moments, but you recognize the cloud is your boss. Usually he checks on you every morning with his "Soooo Peeeeter, how about those TCP/IP reports?" routine, but today he has forgotten everything about common manners and rudely invaded your personal space. No "Good Morning", just some drooling, grunts and curses. He reminds you a bit of a neanderthal trying to get away from a cyber tooth tiger, fear and panic all compressed into a tight ball. You try to decipher the new language that he has created since yesterday and you start to understand that something bad happened overnight - the production system went down. Now, your system is usually used by clients during regular working hours from 9-5, but for whatever reason you didn't get any alerts on your beeper (for people under 30 - a beeper was like a mobile phone that could only ring and tell you who beeped you). Need to remember to charge it next time. So it is 8:45am, and the system MUST be up at 9am. Every 10 seconds, your boss lets out yet another curse which communicates to you that another customer is having problems getting into the system. Also, several account managers are now hovering over your boss trying to make him understand how clients are REALLY REALLY suffering. Everyone is depending on you to get the system up ASAP, while at the same time hindering your progress by constantly distracting you. How do you keep cool in a situation like this?

    Read the article

  • Hosting WCF over internet

    - by user1876804
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose this service over the internet. It will be hosted in a corporate network behind a firewall, and I want the service to be accessible from the internet (it should be able to pass through the firewall). I did some research on this and these are some of the pointers I got: 1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of the bindings is preferable? 2. To get through the corporate firewall I came across the idea of a DMZ server - what is the purpose of this, and do I really need to use one? 3. I will be passing some files between the client and server, and the client needs to know the progress of the processing on the server and the end result. I know this is a very broad question to ask, but could anyone give me pointers on where to start and what approach to take for this problem?
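    For a .NET-to-.NET client reached over the internet through corporate firewalls, HTTP-based bindings are usually the path of least resistance, since netTcpBinding needs extra ports opened. As a rough illustration only (self-hosted here for brevity, and the service/contract names are made up, not taken from the question), a wsHttpBinding endpoint secured at the transport level can be set up like this in C#:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IFileProcessingService
        {
            [OperationContract]
            string GetProgress(string jobId);   // hypothetical operation for progress polling
        }

        public class FileProcessingService : IFileProcessingService
        {
            public string GetProgress(string jobId) { return "50%"; }
        }

        class HostProgram
        {
            static void Main()
            {
                // Transport security means HTTPS on port 443, which firewalls normally allow.
                var binding = new WSHttpBinding(SecurityMode.Transport);
                binding.MaxReceivedMessageSize = 64 * 1024 * 1024; // room for file payloads

                var baseAddress = new Uri("https://services.example.com/FileProcessingService");
                using (var host = new ServiceHost(typeof(FileProcessingService), baseAddress))
                {
                    host.AddServiceEndpoint(typeof(IFileProcessingService), binding, "");
                    host.Open();
                    Console.WriteLine("Listening at {0}", baseAddress);
                    Console.ReadLine();
                }
            }
        }

    Under IIS the same binding would normally be declared in web.config rather than in code, and a DMZ or reverse proxy only decides where this endpoint is physically reachable from; the binding choice stays the same.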

    Read the article

  • Which problem(s) do YOU want to see solved?

    - by buu700
    My team and I are meeting tonight to come up with a business plan, and some community input would be amazing. I've been mulling over this issue for the past few months and bouncing ideas off of others, and now I'd finally like some input from the community. I have come up with a fair selection of ideas, but most of those amount to either fun projects which could potentially be profitable, or otherwise solid business models that have one or two major hurdles (usually related to resources or legality). For our team meeting tonight, my idea is to take inventory of our available skills, resources, and compelling problems which interest us. The last is where I would greatly appreciate some community input. Hell, even entire business ideas/plans would be appreciated. No matter how big or small your thoughts, any input would be appreciated. We're a team of computer scientists, so our business will be primarily based around software/technology/Web solutions. Among my relevant available resources (entire Internet aside), I have the following: A pretty reliable connection to an SEO company and a large production company. A stash of fairly powerful server hardware. A fast network with static IPs. The backend for Hackswipe, which includes credit card payment processing and a Google Voice-based SMS gateway. This work-in-progress design for something completely unrelated but which is backed by some fairly decent infrastructure. Direct access to experts in just about any relevant field (on-campus Carnegie Mellon professors). A sexual relationship with the baron of a small nation. For further down the line, some investor relationships. Not likely to be so relevant, but a decent social media presence (Stack Overflow reputation, modship in some major reddits, various tech forums). The source code for Eugene fucking McCabe. Pooled with the other team members, the list of projects we can build off of would be longer (including an Android app). So, what are your thoughts? Crossposted to reddit

    Read the article

  • Two JavaFX Community Rock Stars Join Oracle

    - by Tori Wieldt
    from Sharat Chander, Director - Java Technology Outreach: These past 24+ months have proved momentous for Oracle's stewardship of Java.  A little over 2 years ago when Oracle completed its acquisition of Sun, a lot of community speculation arose regarding Oracle's Java commitment.  Whether the fears and concerns were legitimate or not, the only way to emphatically demonstrate Oracle's seriousness with moving Java forward was through positive action.  In 2010, Oracle committed to putting Java back on schedule whereby large gaps between release trains would be a thing of the past.  And in 2011, that promise came true.  With the 2011 summer release of JDK 7, the Java ecosystem now had a version brought up to date.  And then in the fall of 2011, JavaFX 2.0 righted the JavaFX ship making rich internet applications a reality. Similar progress between Oracle and the Java community continues to blossom.  New-found relationship investments between Oracle and Java User Groups are taking root.  Greater participation and content execution by the Java community in JavaOne is steadily increasing.  The road ahead is lit with bright lights and opportunities. And now there's more good news to share.  As of April 2nd, two recognized JavaFX technology luminaries and "rock stars" speakers from the Java community are joining Oracle on a new journey. We're proud to have both Jim Weaver and Stephen Chin joining Oracle's Java Evangelist Team.  You'll start to see them involved in many community facing activities where their JavaFX expertise and passion will shine.  Stay tuned! Welcome @JavaFXpert and @SteveonJava !

    Read the article

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB supports backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB), but there are a lot of other storage managers, and MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows you to pass environment variables (not needed by OSB) to third-party managers. At the same time, MEB cannot guarantee it will work with all of them. This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins. MEB requires the following options to be specified by those who want to connect it to the SBT interface:
    --sbt-database-name: a name which should be handed over to the SBT interface. This can be any name. The default, MySQL, works for most cases, so the user is not required to specify this option.
    --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.
    --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and pointed at the TSM configuration file.
    --backup-image=sbt:<image name>: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.
    So the full command in our case would look like: ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \ --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \ --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image
    And this command results in the following output log: MySQL Enterprise Backup version 3.7.1 [2012/02/16] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt' INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully. At the end of a successful 'backup-to-image' run mysqlbackup prints "mysqlbackup completed OK!".
--------------------------------------------------------------------                        Server Repository Options: --------------------------------------------------------------------   datadir                          =  /path/to/data   innodb_data_home_dir             =  /path/to/data   innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M   innodb_log_group_home_dir        =  /path/to/data   innodb_log_files_in_group        =  2   innodb_log_file_size             =  268435456 --------------------------------------------------------------------                        Backup Config Options: --------------------------------------------------------------------   datadir                          =  /path/to/my/dir/datadir   innodb_data_home_dir             =  /path/to/my/dir/datadir   innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M   innodb_log_group_home_dir        =  /path/to/my/dir/datadir   innodb_log_files_in_group        =  2   innodb_log_file_size             =  268435456 Backup Image Path= sbt:my-first-backup mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200 120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0' 120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0' mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 31668381. mysqlbackup: INFO: Starting log scan from lsn 31668224. 120220  8:54:00 mysqlbackup: INFO: Copying log... 120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.           We wait 1 second before starting copying the data files... 120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format). mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000 120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format). mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000 120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format). mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server. 120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables.... 120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk mysqlbackup: INFO: Opening backup source directory '/path/to/data/' 120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/' mysqlbackup: INFO: Backing up the database directory 'mysql' mysqlbackup: INFO: Backing up the database directory 'test' mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 31668381.           (This is the highest lsn found on page)           Scanned log up to lsn 31670396.           Was able to parse the log up to lsn 31670396.           Maximum page number for a log record 328 120220 08:57:23 mysqlbackup: INFO: All tables unlocked mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds 120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063 120220  8:59:01 mysqlbackup: INFO: Full backup completed! 
    mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105 mysqlbackup: WARNING: backup-image already closed mysqlbackup: INFO: Backup image created successfully.: Image Path: 'sbt:my-first-backup' ------------------------------------------------------------- Parameters Summary ------------------------------------------------------------- Start LSN : 31668224 End LSN : 31670396 ------------------------------------------------------------- mysqlbackup completed OK!
    Backup successfully completed. To restore it you should use the same commands as for any other MEB image, but you need to provide the sbt* options as well: $./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \ --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir Then apply the log as usual: $./mysqlbackup --backup-dir=/path/to/my/dir apply-log Then stop mysqld and finally copy back: $./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back  Disclaimer: This is only the story of one success, which may be useful for someone else. MEB is not regularly tested with, and not guaranteed to work with, IBM TSM or any other third-party storage manager.

    Read the article

  • Bluetooth mouse Logitech M555b not recognized on a Macbook Pro 8.2

    - by Pierre
    I just bought a Logitech M555b Bluetooth mouse today. My computer (a MacBook Pro 8,2) has a Bluetooth adapter, and the mouse works perfectly fine on Mac OS X (I set up a new device, click the connect button on the mouse, and voilà, done in 5 seconds). I cannot get it to work on Ubuntu 12.04. When I follow the same steps as on Mac OS X under Ubuntu 12.04, sometimes I will see the mouse name appear for 1 second on the screen, and if I'm fast enough to click on "Continue", I get a message telling me the pairing with my mouse failed. Here are the steps I follow: Turn off Bluetooth in Ubuntu. Turn Bluetooth on. Bluetooth icon > Set up new device... Turn on the mouse, and press the Connect button. Press "Continue" on the Bluetooth New Device Setup screen. I see the mouse name for one second, but then it disappears. Click on PIN options... to select '0000'. ... nothing, Ubuntu won't see anything. In /var/log/syslog, I have the following information: May 27 23:26:16 Trane bluetoothd[896]: HCI dev 0 down May 27 23:26:16 Trane bluetoothd[896]: Adapter /org/bluez/896/hci0 has been disabled May 27 23:26:58 Trane bluetoothd[896]: HCI dev 0 up May 27 23:26:58 Trane bluetoothd[896]: Adapter /org/bluez/896/hci0 has been enabled May 27 23:26:58 Trane bluetoothd[896]: Inquiry Cancel Failed with status 0x12 May 27 23:28:26 Trane bluetoothd[896]: Refusing input device connect: No such file or directory (2) May 27 23:28:26 Trane bluetoothd[896]: Refusing connection from 00:1F:20:24:15:3D: setup in progress May 27 23:28:26 Trane bluetoothd[896]: Discovery session 0x7f3409ab5b70 with :1.84 activated I tried for an hour, I restarted my computer, re-paired the mouse on Mac OS X, etc. Sometimes when I restart the computer, a dialog box shows up saying that the mouse M555b wants to access some resources or something; if I click "Grant", or even if I tick "always grant access", nothing happens... How can I get the Bluetooth pairing to work properly? Help!

    Read the article

  • What are the best strategies for selling Android apps?

    - by Rob S.
    I'm a young developer hoping to sell the apps I've made for Android soon. My applications are basically 99% finished, so I'm investigating what would be the best marketing strategy to use to sell my apps. I'm sure the brilliant minds here can give me some great advice. I'm particularly interested in your thoughts on the following points (especially from experienced Android developers): Is it more profitable to give an app away for free with ads or to sell an app without ads for a price? Perhaps a combination of a free ad-supported version and a paid ad-free version? If you give away an app for free with ads on it, is it ethical to decline bending over backwards to support it? How much does piracy actually affect potential sales? Should any effort be put towards preventing it? Can you still make a profit off your application if you make it open source? Could you perhaps make more of a profit from the attention you would get by doing so? Is Google's Android Marketplace really the best place to release Android apps? Is it worthwhile to maintain a developer blog or website to keep users updated on your development progress and software releases? Any other suggestions you could give me to maximize profit while keeping users happy and coming back for more would also be greatly appreciated. While I appreciate general tips and tricks, I'd like to ask that, if possible, you go the extra step and show how they specifically apply to selling Android apps. Marketing statistics, developer retrospectives, and any additional experience you can share from your time selling Android apps are what I would love to see most. Thank you very much in advance for your time. I truly appreciate all the responses I receive.

    Read the article

  • Looking for a simple web interface with subversion support and ticket /issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project and we have a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface to use with it. I have tried Redmine, but the installation process was extremely bothersome and complicated. Once I got it to work I found out that it was slow and did not meet my expectations, and it seems a bit too complex for our needs. I would prefer to find a solution that supports the lighttpd web server; however, that seems to be very hard to come by - those I have found seem to only have Apache support. Functionality I wish for the website: log in to an svn account, view svn logs, view & create issues, todo lists etc., and view svn diffs. Do you have any open source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I wish to host the website on our own servers.

    Read the article

  • responsibility for storage

    - by Stefano Borini
    A colleague and I were brainstorming about where to put the responsibility for an object to store itself on disk in our own file format. There are basically two choices: object.store(file) or fileformatWriter.store(object). The first gives the responsibility for serialization to disk to the object itself. This is similar to the approach used by Python's pickle. The second puts the representation responsibility on a file format writer object; the data object is just a plain data container (possibly with additional methods not relevant for storage). We agreed on the second approach, because it keeps the writing logic in one place, separate from the generic data. We also have cases of objects implementing complex logic that need to store info while the logic is in progress. For these cases, the fileformatwriter object can be passed in and used as a delegate, calling storage operations on it. With the first pattern, the complex logic object would instead accept the raw file and implement the writing logic itself. The first method, however, has the advantage that the object knows how to write and read itself from any file containing it, which may also be convenient. I would like to hear your opinion before starting a rather complex refactoring.
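    To make the two options concrete, here is a minimal C# sketch of both shapes. The Measurement/MeasurementData names and the binary format are made up for illustration, not taken from the question:

        using System.IO;
        using System.Text;

        // Option 1: the object serializes itself (pickle-style) - object.store(file)
        public class Measurement
        {
            public double Value { get; set; }

            public void Store(Stream output)
            {
                // leaveOpen keeps the caller in charge of the stream's lifetime (.NET 4.5+)
                using (var writer = new BinaryWriter(output, Encoding.UTF8, leaveOpen: true))
                {
                    writer.Write(Value);
                }
            }
        }

        // Option 2: a separate writer owns the file format - fileformatWriter.store(object)
        public class MeasurementData     // plain data container, knows nothing about files
        {
            public double Value { get; set; }
        }

        public class FileFormatWriter
        {
            private readonly Stream _output;
            public FileFormatWriter(Stream output) { _output = output; }

            // All knowledge of the on-disk format lives here; long-running logic can hold
            // a reference to this writer and call Store() incrementally, delegate-style.
            public void Store(MeasurementData data)
            {
                using (var writer = new BinaryWriter(_output, Encoding.UTF8, leaveOpen: true))
                {
                    writer.Write(data.Value);
                }
            }
        }

    The second shape keeps the format change surface in one class, which is the centralization argument made above; the first shape wins when objects must be readable and writable anywhere without a writer in scope.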

    Read the article

  • What are the maths behind 'Raiden 2' purple laser?

    - by Aybe
    The path of the laser is affected by user input and enemies present on the screen. Here is a video; at the 5:00 minute mark the laser in question is shown: Raiden II (PS) - 1 Loop Clear - Part 2. UPDATE Here is a test using Inkscape; the ship is at the bottom, and the first 4 enemies are targeted by the plasma. There seems to be a sort of pattern. I moved the ship first, then the handle from it to form a 45° angle, then while trying to fit the curve I found a pattern of parallel handles and continued like that until I reached the last enemy. Update, 5/26/2012: I started an XNA project using Béziers; there is still some work needed, will update the question next week. Stay tuned! Update, 5/30/2012: It really seems that they are using Bézier curves; I think I will be able to replicate/imitate a plasma of that grade. There are two new topics I discovered since last time: arc length and Runge's phenomenon. The first should help in getting linear movement along a Bézier curve, the second should help in optimizing the number of vertices. Next time I will put up a video so you can see the progress 8-)
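    Since the updates point at Bézier curves and an XNA project, here is a minimal C# sketch of the underlying math: evaluating a cubic Bézier in Bernstein form and sampling it into points for drawing the beam. XNA's Vector2 is assumed only because the asker mentions XNA; any 2D vector type works the same way.

        using Microsoft.Xna.Framework;   // Vector2

        static class Bezier
        {
            // B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3, for t in [0,1]
            public static Vector2 Evaluate(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3, float t)
            {
                float u = 1f - t;
                return u * u * u * p0
                     + 3f * u * u * t * p1
                     + 3f * u * t * t * p2
                     + t * t * t * p3;
            }

            // Sample the curve into points; joining them with short segments draws the laser path.
            public static Vector2[] Sample(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3, int segments)
            {
                var points = new Vector2[segments + 1];
                for (int i = 0; i <= segments; i++)
                {
                    points[i] = Evaluate(p0, p1, p2, p3, i / (float)segments);
                }
                return points;
            }
        }

    The "parallel handles" pattern observed in Inkscape corresponds to chaining several cubic segments so that the handle leaving one endpoint mirrors the handle entering it, which keeps the joined curve smooth. Note that stepping t uniformly does not move at constant speed along the curve, which is exactly where the arc-length work mentioned in the update comes in.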

    Read the article

  • Updating an interface to bootstrap

    - by Anagio
    I'm updating a web app's interface to Bootstrap. There's a lot of existing CSS and JavaScript/jQuery I'll have to migrate; most of it I'll scrap in favour of Bootstrap's, but for parts of the app that use DataTables and the like, all that code has to be migrated. I'm working on a development server. The app has a header.phtml, a sidebar.phtml and a lot of content-area view files. Right now I'm building static versions of view files, say the header. I then open my existing header.phtml in Notepad++, split-screen with the static file, and copy over the dynamic code, then replace the old header.phtml with the one I just made. To make sure the header displayed correctly I had to add all the CSS and JS from Bootstrap. This is conflicting with the current CSS styles, and there are some JS conflicts as well. Should I go through the app, note which JS I absolutely need and which I don't (and the same with the CSS), then strip all the unneeded CSS/JS from the old app so it only has Bootstrap's and any other critical files, and not worry about the way pages look while I make progress updating them? I'd be working on mostly a wireframe of the old site, without any styles, until I get to applying Bootstrap's. Is this efficient, or is there another way I can get through all these files and update them easily?

    Read the article

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems: massive code duplication; no encapsulation - most private data is accessible from outside, and some of the business data has also been made into singletons, so it's not just changeable from outside but from everywhere; and no business model - business data is stored in Object[] and double[][], so no OO. There is a good regression test suite and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start to attack the problem to get some coverage quickly, so I'm able to show progress to management (and in fact start benefiting from the safety net of JUnit tests)? I do not want to employ tools to generate regression test suites, e.g. AgitarOne, because these tests do not test whether something is correct.

    Read the article

  • Marshalling the value of a char* ANSI string DLL API parameter into a C# string

    - by Brian Biales
    For those who do not mix .NET C# code with legacy DLLs that use char* pointers on a regular basis, the process to convert the strings one way or the other is non-obvious. This is not a comprehensive article on the topic at all, but rather an example of something that took me some time to go find; maybe it will save someone else the time. I am utilizing a third-party tool that uses a callback function to inform my application of its progress.  This callback includes a pointer that under some circumstances is a pointer to an ANSI character string.  I just need to marshal it into a C# string variable.  Seems pretty simple, yes?  Well, it is (as are most things, once you know how to do them). The parameter of my callback function is of type IntPtr, which is an integer representation of a pointer.  If I know the pointer is pointing to a simple ANSI string, here is a simple static method to copy it to a C# string: private static string GetStringFromCharStar(IntPtr ptr) {     return System.Runtime.InteropServices.Marshal.PtrToStringAnsi(ptr); } The System.Runtime.InteropServices namespace is where to look any time you are mixing legacy unmanaged code with your .NET application.
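    For context, here is a hedged sketch of how such a helper typically gets used. The delegate below is a made-up callback signature (the real third-party DLL defines its own), but it shows where the IntPtr arrives and where PtrToStringAnsi fits:

        using System;
        using System.Runtime.InteropServices;

        class ProgressInterop
        {
            // Hypothetical native callback signature - not the article's actual tool.
            [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
            public delegate void ProgressCallback(int percentComplete, IntPtr message);

            // Handed to the native side; keep a reference so the delegate isn't collected.
            public static void OnProgress(int percentComplete, IntPtr message)
            {
                // Only valid when the tool documents 'message' as an ANSI char*.
                string text = Marshal.PtrToStringAnsi(message);
                Console.WriteLine("{0}% - {1}", percentComplete, text);
            }
        }
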

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program so it is friendly to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product is interfaced through a proprietary DLL that handles its own serial communication. The hard part The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product takes to go from position 0 to position 10,000 to the tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose Producer-Consumer model where the Producer thread loops through reading the sensors and setting the outputs. The Consumer thread executes functions containing timed loops that poll the Producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress. Unsure where to start Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
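    One way to keep the polling loops from tangling with the test logic is to have a single acquisition loop publish immutable snapshots that the test steps and the UI read at their own pace. A rough C# sketch under those assumptions; all sensor names, thresholds and the poll rate are placeholders, not taken from the question:

        using System;
        using System.Threading;

        // Hypothetical snapshot of one poll of all instruments.
        class BenchReading
        {
            public double PositionCounts;
            public double CurrentAmps;
            public double PressurePsi;
            public DateTime Timestamp;
        }

        class BenchController
        {
            private BenchReading _latest = new BenchReading();

            // Producer: poll every instrument in one place so timing stays predictable.
            public void PollLoop(CancellationToken token)
            {
                while (!token.IsCancellationRequested)
                {
                    var reading = new BenchReading
                    {
                        PositionCounts = ReadScale(),      // stand-ins for the real drivers
                        CurrentAmps = ReadDaqCurrent(),
                        PressurePsi = ReadPressureGauge(),
                        Timestamp = DateTime.UtcNow
                    };
                    Volatile.Write(ref _latest, reading);  // publish the snapshot
                    Thread.Sleep(10);                      // ~100 Hz polling
                }
            }

            // Consumer: one test step times the travel and ramps the load at thresholds.
            public void RunTravelStep(CancellationToken token)
            {
                DateTime start = DateTime.UtcNow;
                bool rampedUp = false;
                while (!token.IsCancellationRequested)
                {
                    BenchReading r = Volatile.Read(ref _latest);
                    if (!rampedUp && r.PositionCounts >= 6000) { SetLoad(5.0); rampedUp = true; }
                    if (r.PositionCounts >= 8000) { SetLoad(0.0); }
                    if (r.PositionCounts >= 10000)
                    {
                        Console.WriteLine("Travel time: {0:F1} s", (DateTime.UtcNow - start).TotalSeconds);
                        return;
                    }
                    Thread.Sleep(5);
                }
            }

            // Placeholders for the real hardware calls (NI DAQ, serial scale, pressure gauge).
            private double ReadScale() { return 0; }
            private double ReadDaqCurrent() { return 0; }
            private double ReadPressureGauge() { return 0; }
            private void SetLoad(double volts) { }
        }

    This keeps each hardware protocol behind one method, so swapping a driver or adding a sensor does not touch the step logic; timing resolution is bounded by the poll interval, which matters for the tenth-of-a-second requirement.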

    Read the article

  • How to hire support people?

    - by Martin
    I manage a tech support team at a mid-sized software company. We are the last line of support, so issues that we can't fix need to be escalated to the development team. When I joined the company, our team wasn't capable of much beyond using a specific set of troubleshooting steps to solve known issues and escalating anything else to the developers. It's always been a goal of mine for our team to shoulder as much of the support burden as possible without ever bothering a developer. Over the past few years, I, along with several new hires I've made, have made pretty good progress in that direction. We've coded our own troubleshooting tools, which now ship with several of our products. When users have never-before-seen issues, we analyze stack traces and troubleshoot down to the code level, and if we need to submit a bug, half the time we've already identified where in the code the bug is and offered a patch to fix it. Here's the problem I've always had: finding support people capable of the work I've described above is really difficult. I've hired 3 people in the past 3 years, and I've probably looked at several thousand resumes and conducted several hundred phone screens to do so. I know it's pretty well accepted that hiring good people is tough in the tech industry, but it seems that support is especially difficult -- there are clearly thousands of people walking around calling themselves support analysts, but 99%+ of them seemingly aren't capable of anything beyond reading a script. I'm curious if anyone has experience recruiting the sort of folks I'm talking about, and whether you have any suggestions to share. We've tried all sorts of things -- different job titles/descriptions, using headhunters, etc. And while we've managed to hire a few good folks, it's basically taken us a year to find an appropriate candidate for each opening we've had, and I can't help but wonder if there's something we could be doing differently.

    Read the article

  • Opportunities in Development in our Swedish office

    - by anca.rosu
    Hi everyone, my name is Henrik and I joined the JRockit group in 2004. Before that my background was at Microsoft, as both a Test Competence lead and a Program Manager. As an Engineering Manager at Oracle I lead a team of 11 developers. I focus on people management and the daily operations of the department, with particular attention to the interaction and dependencies between the groups and departments here at the Stockholm development site. I also make sure my team delivers on our commitments. I would like to give you a brief summary of the Oracle JRockit team: -The development group in Stockholm delivers several products for the Oracle Fusion Middleware stack. Our main products are JRockitVE, which allows you to run a Java Virtual Machine without an operating system, the JRockit Java Virtual Machine, which is the default JVM for all Oracle middleware products, and JRockit Mission Control, a set of tools that allows developers to monitor their applications at runtime and perform advanced latency analysis as well as in-production memory leak detection, etc. -The office has several departments focusing on different aspects of the product development process - not only building features and testing them, but everything from building the infrastructure needed to automatically build and test the products, to sustaining engineering that tracks down bugs in customer systems and provides them with patches. Some inspirational lines around what the Oracle JRockit group can offer you in terms of progress, development and learning: - It is a unique chance to get insight and experience building enterprise-class software for one of the world's largest software companies. Here there are almost unlimited possibilities for the right candidate to learn about silicon features and how to implement support for them in software, and about compiler optimizations. The position will also give insight into the processes needed to produce software at this level in the industry. If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com. Technorati Tags: Development,Sweden,Jrockit,Java,Virtual Machine,Oracle Fusion Middleware,software

    Read the article

  • Sample Browser Visual Studio Extension is localized and introduced to Japan

    - by Jialiang
    http://blogs.msdn.com/b/codefx/archive/2012/10/14/sample-browser-visual-studio-extension-is-localized-and-introduced-to-japan.aspx  From: Japan MVP   "The Sample Browser is very easy to use thanks to the refined interface.  The categorized menu enables faster search. Highly acclaimed.  But it needs localization. It may not be a problem for those who can understand English, but I think localizing Sample Browser into Japanese will promote its use in Japan further." This is a prominent piece of feedback collected from the Japan MVP community since we released the last version of Sample Browser, which was only available in English.  Japanese developers like the Sample Browser, but they want localized code samples, a localized Sample Browser UI, and a localized search experience.  The Japan MVP lead, Satoru Kitabata, observed these needs and expectations.  He started to engage with all local developer MVPs to translate the UI elements in the Sample Browser.  Lots of MVPs signed up to participate in this work.  They had roundtables and newsletters to track the progress.  In just three weeks, every control, every tooltip, every font on every label, was beautifully tuned for Japanese.  The sample search experience was also optimized for Japanese developers - they can type Japanese queries directly to search for code samples.  Together with Microsoft Japan MVPs, the sample use experience has been localized and improved to a new level!    The Japan MVP Lead, Satoru Kitabata, further worked with the MSDN Japan site manager and Japan DPE to introduce the good news of the localized Sample Browser to Japan: Sample Browser  http://msdn.microsoft.com/ja-jp/jj730399   Sample Browser  http://msdn.microsoft.com/ja-jp/jj730398     Thanks to the joint effort and the Japan MVPs' feedback and contributions, the Sample Browser gets the chance to benefit the broader Japanese developer audience.

    Read the article

  • CodePlex Daily Summary for Friday, June 06, 2014

    CodePlex Daily Summary for Friday, June 06, 2014Popular ReleasesVirto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.10: Virto Commerce Community Edition version 1.10. To install the SDK package, please refer to SDK getting started documentation To configure source code package, please refer to Source code getting started documentation This release includes bug fixes and improvements (including Commerce Manager localization and https support). More details about this release can be found on our blog at http://blog.virtocommerce.com.Load Runner - HTTP Pressure Test Tool: Load Runner 1.3: 1. gracefully stop distributed engine 2. added error handling for distributed load tasks if fail to send the tasks. 3. added easter egg ;-) NPOI: NPOI 2.1: Assembly Version: 2.1.0 New Features a. XSSFSheet.CopySheet b. Excel2Html for XSSF c. insert picture in word 2007 d. Implement IfError function in formula engine Bug Fixes a. fix conditional formatting issue b. fix ctFont order issue c. fix vertical alignment issue in XSSF d. add IndexedColors to NPOI.SS.UserModel e. fix decimal point issue in non-English culture f. fix SetMargin issue in XSSF g.fix multiple images insert issue in XSSF h.fix rich text style missing issue in XSSF i. fix cell...HigLabo: HigLabo_20140605: Modify AsciiCharOnly method implementation. Code clean up of RssItem.Description.51Degrees - Device Detection and Redirection: 3.1.2.3: Version 3.1 HighlightsDevice detection algorithm is over 100 times faster. Regular expressions and levenshtein distance calculations are no longer used. The device detection algorithm performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory – the detection data set is loaded into memory and there is no continuous connection to the source data file. Slower initialisation time but faster detection performanc...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.27.0: CodeMap now indicates the type name for all members Implemented running scripts 'as administrator'. Just add '//css_npp asadmin' to the script and run it as usual. 'Prepare script for distribution' now aggregates script dependency assemblies. Various improvements in CodeSnipptet, Autcompletion and MethodInfo interactions with each other. Added printing line number for the entries in CodeMap (subject of configuration value) Improved debugging step indication for classless scripts ...Kartris E-commerce: Kartris v2.6003: Bug fixes: Fixed issue where category could not be saved if parent categories not altered Updated split string function to 500 chars otherwise problems with attribute filtering in PowerPack Improvements: If a user has group pricing below that available through qty discounts, that will show in place of the qty discountClosedXML - The easy way to OpenXML: ClosedXML 0.72.3: 70426e13c415 ClosedXML for .Net 4.0 now uses Open XML SDK 2.5 b9ef53a6654f Merge branch 'master' of https://git01.codeplex.com/forks/vbjay/closedxml 727714e86416 Fix range.Merge(Boolean) for .Net 3.5 eb1ed478e50e Make public range.Merge(Boolean checkIntersects) 6284cf3c3991 More performance improvements when saving.SEToolbox: SEToolbox 01.032.018 Release 1: Added ability to merge/join two ships, regardless of origin. Added Language selection menu to set display text language (for SE resources only), and fixed inherent issues. 
Added full support for Dedicated Servers, allowing use on PC without Steam present, and only the Dedicated Server install. Added Browse button for used specified save game selection in Load dialog. Installation of this version will replace older version.DNN Blog: 06.00.07: Highlights: Enhancement: Option to show management panel on child modules Fix: Changed SPROC and code to ensure the right people are notified of pending comments. (CP-24018) Fix: Fix to have notification actions authentication assume the right module id so these will work from the messaging center (CP-24019) Fix: Fix to issue in categories in a post that will not save when no categories and no tags selectedTEncoder: 4.0.0: --4.0.0 -Added: Video downloader -Added: Total progress will be updated more smoothly -Added: MP4Box progress will be shown -Added: A tool to create gif image from video -Added: An option to disable trimming -Added: Audio track option won't be used for mpeg sources as default -Fixed: Subtitle position wasn't used -Fixed: Duration info in the file list wasn't updated after trimming -Updated: FFMpegVeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014) : Correct issue while creating hidden operating system. Minor fixes (look at git history for more details).Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from nuget or the Microsoft Download Center This issue finally addresses the over zealous behaviour of the HTML Sanitizer which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for webforms compatibility. This will be the last version of AntiXSS that contains a sanitizer. Any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index....Z SqlBulkCopy Extensions: SqlBulkCopy Extensions 1.0.0: SqlBulkCopy Extensions provide MUST-HAVE methods with outstanding performance missing from the SqlBulkCopy class like Delete, Update, Merge, Upsert. Compatible with .NET 2.0, SQL Server 2000, SQL Azure and more! Bulk MethodsBulkDelete BulkInsert BulkMerge BulkUpdate BulkUpsert Utility MethodsGetSqlConnection GetSqlTransaction You like this library? Find out how and why you should support Z Project Become a Memberhttp://zzzproject.com/resources/images/all/become-a-member.png|ht...Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines- Added all the parameters available from the Timeline Endpoints in Tweetinvi. - This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.CreateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweetinvi 0.9.3.1...Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. 
Several breaking cha...Image View Slider: Image View Slider: This is a .NET component. We create this using VB.NET. Here you can use an Image Viewer with several properties to your application form. We wish somebody to improve freely. Try this out! Author : Steven Renaldo Antony Yustinus Arjuna Purnama Putra Andre Wijaya P Martin Lidau PBK GENAP 2014 - TI UKDWAspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contain the Missing Features in Apache POI WP SDK in Comparison with Aspose.Words for dealing with Microsoft Word. What's New ?Following Examples: Insert Picture in Word Document Insert Comments Set Page Borders Mail Merge from XML Data Source Moving the Cursor Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.babelua: 1.5.6.0: V1.5.6.0 - 2014.5.30New feature: support quick-cocos2d-x project now; support text search in scripts folder now, you can use this function in Search Result Window;Credit Component: Credit Component: This is a sample release of Credit Component that has been made by Microsoft Visual Studio 2010. To try and use it, you need .NET framework 4.0 and Microsoft Visual Studio 2010 or newer as a minimum requirement in this download you will get media player as a sample application that use this component credit component as a main component media player source code as source code and sample usage of credit component credit component source code as source code of credit component important...New ProjectsBack Up Your SharePoint: SPSBackUp is a PowerShell script tool to Backup your SharePoint environment. It's compatible with SharePoint 2010 & 2013.ChoMy: mrkidconsoledemo: a basic app about the knowledge in c# domainDecision Architect Server Client in C#: This project is a client for Decision Architect Server API.InnVIDEO365 Agile SharePoint-Kaltura App: InnVIDEO365 Agile SharePoint - Kaltura App for SharePoint Online is a complete video content solution for Microsoft SharePoint Online.JacOS: JacOS is an open-source operating system created with COSMOSKISS Workflow Foundation: This project was born from the idea that everyone should be able to use Workflow Foundation in best conditions. It contain sample to take WF jump start!Media Player (Technology Area): A now simple, yet somehow complex music, photo viewer I have some plans for the best tools for the player in the futureMetalurgicaInstitucional: MetalurgicaInstitucionalmoodlin8: Moodlin8 try to exploit significant parts of Moodle, the popular LMS, in a mobile or fixed environment based on the Modern Interface of Windows 8.1.NewsletterSystem: This is a test projectpesho2: Ala bala portokalaProyecto web Lab4 Gioia lucas: proyecto para lab4 utn pachecoRallyRacer: TSD Rally Compute helperVideo-JS Webpart for Sharepoint: Video-JS Webpart for SharepointYanZhiwei_CSharp_UtilHelp: ??C#???????????API: ????? Web API

    Read the article

  • Preventing possible burnout in a junior dev, or perhaps I'm not doing enough?

    - by m.edmondson
    I'm a software developer with five years' experience across three companies. Within the last year a junior developer (brand new to the industry) has started at my current employer. I believe he is an excellent developer who always delivers and is skilled at solving complex problems. However, I'm slightly concerned that he may be applying himself too much, for the following reasons: he begins work approximately two hours earlier than most (and than is expected), and in his free time he has developed an application, clearly months' worth of work, that is specific to our employer. The team and I are completely grateful for all he is doing, and he is clearly an asset to our team. However, I'm worried that this is not sustainable. I can see that he has the same enthusiasm I had when I began coding for work, but over the years I've realised that extracurricular work not only doesn't advance your career, it also eats into your all-important free time. The question I'm asking is: should I advise him to take things a bit more slowly? Or perhaps I need to learn from him and do more for my employer out of hours?

    Read the article

  • Music While Coding [closed]

    - by inspectorG4dget
    Hi SO. Generally, while I'm coding, I prefer to listen to some background music - nothing that'll get me distracted, but something that'll help keep the rhythm and isn't counterproductive when I need to stop coding to debug or to think of a way around a small problem that stands in the way of progress. Now, I have read some similar questions on Reddit and on SO - specifically "Which songs do you find most productive to listen to while coding", "Music while programming" and more. Sadly, a lot of these questions were closed as off-topic. But (1) I don't think this question is off-topic, and I think a lot of programmers can benefit from it, and (2) it's a real question: I really want to know what music you would recommend, because music helps when I'm coding. It's sad that "SO: Music to listen to while coding" cannot be found, and the other links aren't of much help. I hope this doesn't get closed. PS: I want to turn this into a community wiki, but I don't seem to know how; I'd appreciate any help. Thank you, all. In response to kirk.burleson's comment: in case the question isn't already clear, I'm asking for recommendations for music to listen to while coding. I would like to know what you listen to when you code so that I can try it too. I am running out of good "coding music", and this is a problem for me because good "coding music" helps me code better.

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High's presentation "The SOA Component Model" hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM's DeveloperWorks website, SCA is a set of specifications that outline a model for constructing applications and systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object Groups and Standard Development Groups. The Component/Data Object Group would work only on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development Group would work on new and existing projects that incorporate those components to accomplish various business tasks. In my opinion, incorporating SCA into any IT department will initially slow the pace of new feature development because of the time needed to create the new, loosely coupled components. However, once a company matures in its SCA process, the number of features delivered will greatly increase, because the loosely coupled components needed to add new features will already be built and ready to incorporate into any new feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved November 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved November 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean the user has to continually poll, or that "eventual" means an update has to take more than 500ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or where that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client consumes WebAPI RESTful services, which send commands to NServiceBus command handlers; these save to NEventStore, which dispatches events to NServiceBus event handlers, which send messages to a SignalR hub, which pushes notifications back to the AngularJS web client. With that setup, theoretically: someone initiates a request; the server validates the request and sends out the necessary commands; in the meantime the client gets a 200 response and updates the view ("working on it"); some time later it gets a message ("done, here's the updated data"). Here's where things get interesting: each command could spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc. Which event handler needs to notify the client? Should all of them, in a progress-bar-style update?
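    For illustration only, here is a minimal C# sketch (not the poster's actual code) of how one of those NServiceBus event handlers could forward a completion notification to the AngularJS client through a SignalR hub. The hub class, the event shape and the client-side callback name are assumptions:

        using System;
        using Microsoft.AspNet.SignalR;
        using NServiceBus;

        // Hypothetical event published once the command has been processed
        // (the event names are taken from the question).
        public class CustomerNameUpdated
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
        }

        // Empty hub; the AngularJS client connects to it and listens for pushes.
        public class NotificationsHub : Hub { }

        // NServiceBus event handler that relays the event to connected browsers.
        public class CustomerNameUpdatedHandler : IHandleMessages<CustomerNameUpdated>
        {
            public void Handle(CustomerNameUpdated message)
            {
                var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
                // "customerUpdated" is an assumed callback registered by the AngularJS app.
                hub.Clients.All.customerUpdated(message.CustomerId, message.Name);
            }
        }

    On the multiple-events question, one common compromise is to also publish a single summarising event (or have the client know how many notifications to expect) so the UI can show one "done" state rather than a flicker of partial updates; whether that fits depends on how granular the progress display needs to be.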

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >