Search Results

Search found 25877 results on 1036 pages for 'information driven'.


  • Summary of our Recent Pull Request Enhancements on CodePlex

    Over the past several weeks, we’ve been incrementally rolling out a bunch of enhancements to our pull request workflow for Git and Mercurial projects. Our goal is to make contributing to open source projects a simple and rewarding experience, and we’ll continue to invest in this area. Here’s a summary of the changes so far, in case you’ve missed them. As always, if you have any feedback, please let us know, whether on our ideas page or via Twitter.

    Support for branches: You can now pick the source and destination branches for your pull request, whether you’re sending one from your fork or using it within a project to collaborate with your other trusted contributors.

    A redesigned creation experience: Our old pull request creation form was rather lacking. It asked for a title and comment in a small modal dialog, but that was about it. We knew we could do better, so we rethought the experience. Now, when you create a pull request, you’re taken to a new page that lets you select the source and destination, and gives you information on the diffs and commits that you’re sending, so you can confirm that you’re sending the right set of changes.

    Inline code snippets in discussion: If users comment on code in your pull request, we now display a preview of the relevant snippet of code inline with their comment on the discussion. Subsequent replies on that line are combined in a single thread to preserve your context. No more clicking and hunting to find where the comments are. And you can add another inline comment right from the discussion area.

    Comment notifications: You can now elect to receive an e-mail notification when a user comments on your pull request. If the comment is on a line of code, we’ll display the relevant code snippet in the e-mail.

    Redesigned diff viewer: Our old diff viewer hadn’t been touched in a while and was in need of an update. We started with a visual facelift, using standard red/green colors for additions/deletions and removing the noisy “dots” that represented spaces and littered the diff viewer. Based on feedback that the viewable region for diffs was too small, especially for smaller screen resolutions, we revamped the way the viewport for the code is sized; it now expands to fill the majority of the browser height when scrolling down. The set of improvements we implemented here also applies anywhere diffs are viewed, not just to pull requests.

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that were developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI!

    Unit of test: The unit test should be created for the service composition, which in OSB terms is the proxy service combined with its business service. Now, since you do not want to rely on any other services, you should provide mock services for all services invoked from your component under test. In a previous article, I wrote about mocking your services in soapUI. While that approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) violates one of the core principles of unit testing: make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test.

    Scenario: The scenario I implement for testing is a simple currency converter. The external request consists of a from currency, a to currency, and an amount (in currency from). The service performs an exchange-rate lookup using the WebServiceX CurrencyConverter and returns a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange-rate lookup. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • High-traffic chat - how to check for a new message and show it to all users

    - by user2633999
    I already asked a question about this, but it was not well received; apparently it was too long, when it was actually just more information so you could give me a better answer. OK, I will be much clearer now.

    What is the best logic for developing a scalable chat in terms of stability: storing/reading messages, updating the chat with new messages for all users, etc.? I have most of this developed; the logic I think I'm missing is how to check whether there is a new message and show it to all users. I have this implemented, but it crashes the site under its traffic of 300k-400k people, so that's my main question.

    The chat is PHP based and uses Pusher (www.pusher.com) for instant messaging, but that lacks what I need because it's more like a websocket. I'm using flat files to keep messages (I want to avoid a database as much as possible); they are files with no extension, I'm sure you know the type. I'm getting the crash with:

        $fp = fopen(..., "w"); // pretend ... is the path and filename
        fwrite($fp, $msg);     // write the message
        fclose($fp);

    where $msg is the message itself. I'm having one file per message, and I show the last 150 messages, which means 150 file accesses and reads; yeah, it's too much, I guess. I have better logic now which I'm pursuing: one file with the last 50-100 messages at all times. Sure, it should be much better.

    How does it crash? That's the trickiest part, because everything seems ordinary; believe me, it is difficult to determine what exactly crashes the site, but within about 5 minutes, when I try to open the site, it's gone. Then I put back the old content without the chat and it's online again. I'm having jQuery post every 1 second to check whether there is a new message. I'm using a timestamp in a special file where I keep the time the last message was sent, and if ((time() - time in file) <= 2) I reload the last 150 messages, including the last one. Too much input/output, too many writes and reads, is what I think crashes the site.
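
    A minimal sketch of the single-file approach described above, assuming a newline-delimited log file (the chat.log path and the JSON response shape are illustrative, not from the question). LOCK_EX serializes concurrent writers so messages can't interleave, and readers only ever open one file per poll:

        <?php
        // Append one message; LOCK_EX blocks other writers for the duration.
        function append_message(string $file, string $msg): void {
            $line = str_replace("\n", " ", $msg) . "\n"; // one message per line
            file_put_contents($file, $line, FILE_APPEND | LOCK_EX);
        }

        // Fetch the last $count messages with a single open/read.
        function last_messages(string $file, int $count = 150): array {
            $lines = @file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
            return $lines === false ? [] : array_slice($lines, -$count);
        }

        append_message('chat.log', $_POST['msg'] ?? '');
        print json_encode(last_messages('chat.log'));

    The file still needs trimming now and then (for example, rewrite it with only the newest 100 lines once it grows past a few thousand), but per poll this turns 150 opens into one.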

    Read the article

  • Variable number of GUI Buttons

    - by Wakaka
    I have a generic HTML5 Canvas GUI Button class and a Scene class. The Scene class has a method called createButton(), which creates a new Button with an onclick parameter and stores it in a list of buttons. I call createButton() for all UI buttons when initializing the Scene. Because buttons can appear and disappear very often during rendering, the Scene first deactivates all buttons (temporarily removing their onclick, onmouseover, etc. properties) before each render frame. During rendering, the renderer then activates the buttons required for that frame.

    The problem is that part of the UI requires a variable number of buttons, and their onclick, onmouseover, etc. properties change frequently. An example is a buffs system: the UI lists all buffs as square sprites for the currently selected unit, and mousing over each square brings up a tooltip with some information on the buff. But the number of buffs is variable, so I won't know how many buttons to create at the start. What's the best way to solve this problem? (P.S. My game is in JavaScript, and I know I can use HTML buttons, but I would like to make my game purely Canvas-based.)

    1. Create buttons on the fly during rendering, so I will only have buttons when I require them. After the render frame these buttons would be useless and removed.
    2. Create a fixed set of buttons, assuming the number of buffs per unit won't exceed some limit. During each render frame, activate the buttons accordingly and set their onmouseover properties.
    3. Assign a button to each Buff instance. This sounds wrong, as the buff button is part of the GUI, which can only have one unit selected; assigning a button to every single Buff in the game seems to be overkill. Also, I would need to change the button's position every render frame, since its order in the unit's list of buffs matters.

    Any other solutions? I'm actually quite for idea (1), but am worried about the memory/time issues of creating a new Button() object every render frame. Then again, this is JavaScript, where object creation is oh-so-common ({} mainly) due to automatic garbage collection. What is your take on this? Thanks!
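
    One way to get the convenience of option (1) without per-frame allocation churn is a small object pool. A hedged sketch (Button, its handler properties, and showTooltip are assumed from the question, and the layout numbers are invented):

        // Reuse Button objects across frames instead of allocating new ones.
        var buttonPool = {
            free: [],
            acquire: function () {
                return this.free.length ? this.free.pop() : new Button();
            },
            release: function (btn) {
                btn.onclick = null;     // deactivate so stale handlers can't fire
                btn.onmouseover = null;
                this.free.push(btn);
            }
        };

        // Each frame: release last frame's buff buttons, then acquire fresh ones.
        function renderBuffs(buffs, activeButtons) {
            activeButtons.forEach(function (b) { buttonPool.release(b); });
            activeButtons.length = 0;
            buffs.forEach(function (buff, i) {
                var btn = buttonPool.acquire();
                btn.x = 10 + i * 34; // assumed square-sprite layout
                btn.y = 10;
                btn.onmouseover = function () { showTooltip(buff); };
                activeButtons.push(btn);
            });
        }

    The pool caps allocation at the largest number of buttons ever visible at once, which sidesteps the GC worry while keeping the "buttons exist only when needed" semantics of option (1).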

    Read the article

  • How / Where can I host my Java web application? [closed]

    - by Huliax
    Possible Duplicate: How to find web hosting that meets my requirements?

    In case you want to skip to the crux of my inquiry, just read the bold type. I just finished my CS degree (at 39 years old :-)). For my final project I designed and built a system that can provide local positioning / location awareness to mobile wifi devices (I only have an Android client thus far). The server receives data from clients, processes it, and responds to the clients with messages containing information about their respective locations. I would like to continue the project (perhaps release it as open source, but that is a different discussion).

    Thus far my server application has been running on the CS department's hardware, where I could pretty much do whatever I wanted. I'm getting kicked off that system in a few weeks, so I have to find a new home for my server application. I need a host that will let me run my Java server (along with a MySQL db), preferably on the cheap, since I haven't yet got a job. I have very little experience with the "real world" of web development and hosting, and I'm having trouble figuring out what kind of hosting service will let me run my application as is. If that turns out to be a tall order, then I need to know what my options are for changing things so that I can get up and running with some hosting.

    As an aside, I'm also researching whether or not I should rewrite this in a different language, trying to figure out if there is a substantially better (for whatever reason) one for what I'm doing. This might also have a bearing on my hosting needs; one possibility is to write the server in something more widely accepted by hosting services. I have been searching for answers to my question and haven't found quite what I'm looking for. Part of the problem might be that I don't know exactly what terminology to use. If there is a good answer to this question elsewhere, please feel free to point me towards it. Thanks for the help / advice.

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, the information is confusing me.

    Problem: I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it would be better for users to see a clue to the page content in the URL.

    Solution: With this in mind, I am trying it on one section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    Problems after the solution: The problem, as I see it, is that the URL of those blog articles is now certainly very descriptive, but it is also impossible to remember. So this brings me back to the same issue I had with my previous problem: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    Open questions: I still have some open questions and don't seem to be able to find answers, because the answers tend to contradict one another.
    1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure.
    2) Is this really good for SEO? One of those blog articles I redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think (URL slug and SEO - structure), but see this other question with the opposite opinion.
    3) To make the question concrete with a specific example, would this URL risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just want to facilitate my users without running afoul of Google's algorithms.

    Read the article

  • RAID superblock missing on a single partition. Recovery needed!

    - by user171639
    OK, so I have a 2 TB RAID 1 setup with three partitions: sdc1 (Linux), sdc2 (swap), and sdc3 (an LVM for data). However, the LVM will no longer mount. So I thought I would take the first drive, mount it in Linux (I've done this before), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using:

        sudo su
        apt-get install mdadm lvm2
        mdadm --assemble --scan
        modprobe dm-mod
        vgscan
        vgchange -ay c
        mount -o ro /dev/c/c /mnt

    Unfortunately, vgscan does not recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition, and the second drive is stuck in spare mode. Any ideas? Or a way to force-mount the data partition just to copy the data?

        knoppix@Microknoppix:~$ sudo su
        root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        lvm2 is already the newest version.
        mdadm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded.
        root@Microknoppix:/home/knoppix# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 1 drive (out of 2).
        mdadm: /dev/md/0 has been started with 1 drive (out of 2).
        root@Microknoppix:/home/knoppix# modprobe dm-mod
        root@Microknoppix:/home/knoppix# vgscan
        Reading all physical volumes. This may take a while...
        No volume groups found
        root@Microknoppix:/home/knoppix# cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdc1[2]
              4193268 blocks super 1.2 [2/1] [U_]
        md1 : active raid1 sdc2[2]
              524276 blocks super 1.2 [2/1] [U_]
        unused devices: <none>
        root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3
        mdadm: looking for devices for /dev/md2
        mdadm: no recogniseable superblock on /dev/sdc3
        mdadm: /dev/sdc3 has no superblock - assembly aborted
        root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock
        dumpe2fs 1.42.4 (12-Jun-2012)
        Primary superblock at 0, Group descriptors at 1-1
        Backup superblock at 32768, Group descriptors at 32769-32769
        Backup superblock at 98304, Group descriptors at 98305-98305
        Backup superblock at 163840, Group descriptors at 163841-163841
        Backup superblock at 229376, Group descriptors at 229377-229377
        Backup superblock at 294912, Group descriptors at 294913-294913
        Backup superblock at 819200, Group descriptors at 819201-819201
        Backup superblock at 884736, Group descriptors at 884737-884737

    Notes: I can read the superblock from the spare drive. I was going to try to restore the superblock from one of the backups, but I don't know how, or whether that would work. I have also heard that creating a new array (mdadm --create) using the same parameters will not delete the data on the drive, but I didn't want to risk it. Recommendations?
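
    On the mdadm --create option mentioned in the notes, a hedged sketch of how that recovery is usually attempted; the device names and metadata version here are assumptions, so confirm them with --examine on the member that still has a superblock before writing anything:

        # Inspect the surviving member to confirm metadata version and layout
        mdadm --examine /dev/sdd3   # hypothetical: the other disk's data partition

        # Recreate the mirror degraded; --assume-clean skips the resync and only
        # rewrites the md superblock, leaving the data blocks untouched
        mdadm --create /dev/md2 --level=1 --raid-devices=2 \
              --metadata=1.2 --assume-clean /dev/sdc3 missing

        # Then retry the LVM scan and a read-only mount to copy the data off
        vgscan && vgchange -ay
        mount -o ro /dev/c/c /mnt

    This only works if the level, device count, metadata version, and data offset match the original array, which is why imaging the partition first (e.g. with dd to a spare disk) is the safer path.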

    Read the article

  • Operation times out trying to SSH from outside the LAN, i.e. from the Internet to the LAN no connection is established

    - by Pelle L
    I run Ubuntu 12.04 and have no success connecting with SSH from the Internet. The router is a TL-MR3420, which is set up to forward requests to one of the NICs on the Ubuntu machine (which has three NICs in total). I can SSH from a client on the local network/LAN, and the forwarding mechanism in the router seems to work: if I stop the SSH service on the Ubuntu machine and instead start one on the Windows machine, it works like a charm. I do not use the standard port 22, but that shouldn't be an issue as far as I understand, since it works on the same port on the Windows machine. Since my public IP isn't static I use a dynDNS service, but as said earlier, the same setup works from the Windows machine.

    The router is located at 192.168.0.1. The Ubuntu NICs have the following IPs: eth2 192.168.0.100, eth1 192.168.0.101, eth0 192.168.0.102, and I have forwarded the "outside" requests to 192.168.0.100. In regards to firewall settings on the Ubuntu machine, I have disabled ufw, and the command ufw status gives "Status: inactive". I don't know if this is relevant information, but the command iptables --list gives:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have tried to capture traffic with the help of wireshark (a tool I'm not too used to using), and it seems a few (3?) requests actually reach the NIC, but... nothing happens. The syslog does not show any entries during these attempts. Perhaps it is a routing issue, but I have reached my level of competence and am stuck; all help and support to get this sorted out is much appreciated. I'm new to Linux, so please do not assume I have a configuration that is correct. But as I wrote earlier, if the client that initiates SSH is on the LAN, it all works.

    PS: I have also tried to get VPN (PPP) working from the Internet, with no success; once again, VPN works on the Windows machine. So my best guess is that this is related to how the Ubuntu machine handles (IP) traffic, and not to the TL-MR3420 router or other network issues.
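
    A hedged diagnostic sketch for this symptom (port 2222 stands in for the unspecified non-standard port; NIC names follow the question). With three NICs on the same 192.168.0.0/24 subnet, a common failure mode is that requests arrive on eth2 but the replies leave via another NIC, so checking where packets arrive and where sshd listens is the first step:

        # Is sshd actually listening on the forwarded port and on all addresses?
        sudo netstat -tlnp | grep sshd
        grep -E 'Port|ListenAddress' /etc/ssh/sshd_config

        # Watch the forwarded NIC while a connection is attempted from outside
        sudo tcpdump -n -i eth2 port 2222

        # Three NICs on one subnet usually means three overlapping routes;
        # replies may leave via a different NIC than the one the SYN came in on
        ip route show

    If tcpdump shows inbound SYNs on eth2 but replies leave eth0 or eth1, unplugging the extra NICs (or moving them to different subnets) is the quickest test.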

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct.

    First of all, I use two global variables to control the two threads: if a global variable is set to 1, that means the thread can start working; otherwise it must not work. This is checked by the thread running an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows down the program considerably, and it also fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are:
    1. Why does adding these two threads slow down my application?
    2. Why doesn't it work when compiling for Release or with optimizations?
    3. Can I speed up the application with threads? If so, how?

    Thanks in advance.
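
    A hedged sketch of one conventional alternative to the spin-on-a-global handoff, using pthreads (the worker body and struct fields are placeholders, not the poster's code). It is consistent with the optimized-build symptom above: the compiler may legally hoist the read of a plain int out of an empty while loop, so the loop never sees the flag change; a mutex/condition-variable pair makes the handoff visible to the other thread and also stops the busy-wait from burning a core:

        #include <pthread.h>
        #include <stdbool.h>

        /* One slot per worker thread, replacing the two plain global flags. */
        typedef struct {
            pthread_mutex_t lock;     /* init with PTHREAD_MUTEX_INITIALIZER */
            pthread_cond_t  go, done; /* init with PTHREAD_COND_INITIALIZER  */
            bool has_work, finished;
            /* per-triangle data for this worker goes here */
        } worker_t;

        void *scanline_worker(void *arg) {
            worker_t *w = arg;
            for (;;) {
                pthread_mutex_lock(&w->lock);
                while (!w->has_work)              /* sleep instead of spinning */
                    pthread_cond_wait(&w->go, &w->lock);
                w->has_work = false;
                pthread_mutex_unlock(&w->lock);

                /* rasterize this thread's (even or odd) scanlines here */

                pthread_mutex_lock(&w->lock);
                w->finished = true;
                pthread_cond_signal(&w->done);
                pthread_mutex_unlock(&w->lock);
            }
            return NULL;
        }

        /* Main thread: publish the triangle, block until the worker reports back. */
        void dispatch_and_wait(worker_t *w) {
            pthread_mutex_lock(&w->lock);
            w->has_work = true;
            w->finished = false;
            pthread_cond_signal(&w->go);
            while (!w->finished)
                pthread_cond_wait(&w->done, &w->lock);
            pthread_mutex_unlock(&w->lock);
        }

    Because the flags are only touched under the lock, optimized builds cannot cache them away; and since per-triangle dispatch costs two lock/signal round trips, threads only pay off if each dispatch covers enough scanlines to amortize that synchronization.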

    Read the article

  • Oracle UK Technology "Tech Fest"

    - by rituchhibber
    ** As a priority partner, we are sending you advance notice of these exclusive "Technology Test Fest" free examination sessions. Please note that this communication will be sent out to the wider community one week from today, so please register immediately to secure your place! **

    We are delighted to offer you the exclusive opportunity to register for and attend the Oracle UK "Technology Test Fest", held as part of the UKOUG Conference at the ICC in Birmingham, in the Drawing Room at the Hyatt Regency hotel adjacent to the ICC venue, from 3rd to 5th December 2012. This is your opportunity to sit your chosen Oracle Technology Specialist Implementation Exam free of charge. Four sessions are being run (at 10:00 and 14:00), with just 15 places at each session, so register now to avoid disappointment! (Exams take about 1.5 hours to complete.)

    Which Implementation Specialist Exams are available to take? Click here to see the list of exams available for you to sit for free at the Oracle UKOUG "Technology Test Fest". The links also include the study guide for each exam. Please review the Specialization Guide as well.

    How do I register for the Oracle UK "Technology Test Fest"? Fill out the Pearson VUE profile here and complete it with your OPN Company ID (instructions on how to create/update the profile can be found here), then register for one of the four sessions using the registration links at the top of this page:
    - 3rd December 2012 at 14:00
    - 4th December 2012 at 10:00 (morning session)
    - 4th December 2012 at 14:00 (afternoon session)
    - 5th December 2012 at 10:00 (morning session)

    VENUE DETAILS: The Drawing Room, Hyatt Regency Hotel Birmingham, 2 Bridge Street, Birmingham, B1 2JZ, 3-5 December 2012. You will need to bring your own laptop with a Windows OS and a form of identification to be able to take any of the exams.

    Need help or advice? For more information about the tests and the Get Specialized programme, please contact Ishacq Nada. For issues with your profile or any other OPN-related problems, please contact the Oracle Partner Business Centre or call 08705 194 194. We look forward to welcoming you to the Oracle UK "Technology Test Fest" on the 3rd-5th December 2012! Book early to avoid disappointment.

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member,

    Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on the many new products and launches. Make sure you download the presentations from our WebLogic Community Workspace (WebLogic Community membership required) to train yourself and for your next customer meeting. Thanks for all the tweets at #WebLogicCommunity, the pictures on our Facebook page, and the nice blog posts from Guido & Lucas & Jan.

    JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th, Nürnberg, Germany. All developers can use WebLogic free of charge!

    For developers, there is lots of ADF news: Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans.

    If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you. You just need to go to the Knowledge Zone section, select the "Specialization" tab and click "Apply Now". Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in production: the Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599). In our specialization benefit series, we highlight this month the opportunity to promote your WebLogic services with Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications - A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at the UK Oracle User Group Conference 2012 in Birmingham.

    Jürgen Kress, Oracle WebLogic Partner Adoption EMEA

    To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Help recovering broken OS (permissions issue)

    - by Guandalino
    (At the bottom there is an important update.)

    I was doing experiments in order to back up a remote account to my local system, Ubuntu 12.04 LTS. I'm not confident with duplicity, and probably, due to wrong syntax, some local files have been replaced with remote files. This is just a supposition; I'm not sure this is the real cause of the OS corruption, but the corruption happened after experimenting with backups, so I think I did something wrong in that regard. I became aware there was a problem when I tried to run a command using sudo:

        $ sudo ls
        sudo: unable to open /etc/sudoers: Permission denied
        sudo: no valid sudoers sources found, quitting
        sudo: unable to initialize policy plugin

    This is how /etc/sudoers looks:

        $ ls -ald /etc/sudoers
        -r--r----- 1 root root 788 Oct 2 18:30 /etc/sudoers

    At this point I tried to reboot, and now this is the message I get: "The system is running in low graphics mode. Your screen, graphics card and input device settings could not be detected correctly. You will need to configure these yourself." I tried to follow the wizard to configure these settings, but without luck (the system prevents me from going on when I press "Next"). The thing that makes me a bit less worried is that all the data on the disk seems readable and I'm able to access it using a live CD. I ran memtest and the RAM seems to be OK. Do you have any idea how to recover my system? I'm very glad to provide further information; just let me know what info would be helpful.

    UPDATE: The issue is about wrong permissions, and this is how I discovered it. I mounted the root partition of the broken OS on /mnt/broken/ (live CD) and did ls /mnt/broken/. I got a permission denied error, while I expected to see the directory listing; I had to do sudo ls /mnt/broken/ and that worked. Thus, without root permission via sudo, it's impossible to access the root of the broken OS. The current output of ls -ld /mnt/broken/ is:

        drwxr-x--- 29 1000 812 4096 2012-12-08 21:58 /mnt/broken

    Any thoughts on how to restore the old (working) set of permissions?
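
    Based on that last listing (the root of the installed system owned by uid 1000 with mode 750, where it should be root-owned 755), a hedged repair sketch from the live CD; the mount point follows the update above, everything else is an assumption to verify before running:

        # Root of the installed system should be root:root and world-traversable
        sudo chown root:root /mnt/broken
        sudo chmod 755 /mnt/broken

        # sudoers must be root-owned, mode 0440, or sudo refuses to run
        sudo chown root:root /mnt/broken/etc/sudoers
        sudo chmod 440 /mnt/broken/etc/sudoers

    If the bad backup run changed ownership recursively, more directories may need the same treatment; comparing against a healthy 12.04 system (or reinstalling affected packages) is safer than a blind recursive chown.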

    Read the article

  • UPK 11.1 Content Development Training Offering from Oracle University

    - by user581320
    Oracle University has announced the availability of UPK 11.1 Content Development training courses. These classroom and virtual-classroom courses are available starting later this month. The course is designed for course authors, editors, and other individuals who need to record and edit content in the User Productivity Kit Developer.

    Through hands-on exercises, participants will learn how to build an outline, prepare for and record content in the target application, and use the Topic Editor to customize recorded content. Students will learn how to link web pages and external files to their content, as well as create and link questions and assessments. Upon completion, participants will preview their content in the available playback modes before publishing. They will also explore the various deployment options for publishing, including the options for printed documents, and learn how to customize templates for player and print output.

    For information about the UPK 11.1 Content Development training course, including offering dates and locations, visit the Oracle University web page for the course.

    Read the article

  • How can you tell whether to use Composite Pattern or a Tree Structure, or a third implementation?

    - by Aske B.
    I have two client types, an "Observer" type and a "Subject" type. Both are associated with a hierarchy of groups.

    The Observer will receive (calendar) data from the groups it is associated with throughout the different hierarchies. This data is calculated by combining data from the 'parent' groups of the group trying to collect data (each group can have only one parent). The Subject will be able to create the data (that the Observers will receive) in the groups it is associated with. When data is created in a group, all 'children' of the group will have the data as well, and they will be able to make their own version of a specific area of the data while staying linked to the original data (in my specific implementation, the original data will contain time period(s) and a headline, while the subgroups specify the rest of the data for the receivers directly linked to their respective groups). However, when the Subject creates data, it has to check whether any affected Observer has data that conflicts with it, which means a huge recursive function, as far as I can understand.

    So I think this can be summed up as: I need a hierarchy that you can go up and down in, and in some places treat as a whole (recursion, basically). Also, I'm not just aiming at a solution that works; I'm hoping to find a solution that is relatively easy to understand (architecture-wise, at least) and flexible enough to easily receive additional functionality in the future. Is there a design pattern, or a good practice to go by, to solve this problem or similar hierarchy problems?

    EDIT: Here's the design I have. The "Phoenix" class is named that way because I haven't thought of an appropriate name yet. Besides this, I need to be able to hide specific activities from specific observers, even though they are attached to them through the groups.

    A little off-topic: personally, I feel that I should be able to chop this problem down into smaller problems, but it escapes me how. I think it's because it involves multiple recursive functionalities that aren't associated with each other, and different client types that need to get information in different ways. I can't really wrap my head around it. If anyone can guide me in a direction of how to become better at encapsulating hierarchy problems, I'd be very glad to receive that as well.
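
    The up-and-down traversal described maps naturally onto the Composite pattern; a minimal hedged sketch in Java (class and method names are illustrative, not from the poster's design). Each group knows its single parent and its children, so the Observer side walks up while the Subject side's conflict check recurses down:

        import java.util.ArrayList;
        import java.util.List;

        class Group {
            private final Group parent;                  // null at the root
            private final List<Group> children = new ArrayList<>();
            private final List<CalendarData> data = new ArrayList<>();

            Group(Group parent) {
                this.parent = parent;
                if (parent != null) parent.children.add(this);
            }

            // Observer side: this group's data combined with everything inherited.
            List<CalendarData> collectData() {
                List<CalendarData> result = new ArrayList<>();
                if (parent != null) result.addAll(parent.collectData());
                result.addAll(data);
                return result;
            }

            // Subject side: check the whole subtree for conflicts before adding.
            boolean conflictsBelow(CalendarData candidate) {
                for (CalendarData d : data)
                    if (d.overlaps(candidate)) return true;
                for (Group child : children)
                    if (child.conflictsBelow(candidate)) return true;
                return false;
            }
        }

        // Placeholder for the poster's calendar entries; overlaps() is assumed.
        class CalendarData {
            boolean overlaps(CalendarData other) { return false; /* stub */ }
        }

    The recursion stays contained in two methods, and the "treat a subtree as a whole" requirement is exactly what Composite gives you; hiding activities from specific observers could then be a filter applied to the list collectData() returns.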

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced: a Microsoft SQL Azure Lab codenamed Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless, we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" addressed to the power user. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made on his post, I had a positive confirmation of my first impression: "…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging."

    The typical operations of the ETL phase (processing and organization of data in different formats) can be carried out with Data Explorer Mashup. The flexibility in the manipulation of information is given by the Data Explorer Formula Language, a formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the power user, providing flexible and very easy-to-use tools for data analysis; the prime example of this is PowerPivot. The question that remains is always the same: having more power users in a company will implicitly mean having different data models representing the same reality, and this would inevitably lead to anarchic data management... What do you think about that?

    Read the article

  • Building Enterprise Smartphone App – Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Key Concerns of Smartphones in the Enterprise: These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications. Most of them have nothing to do with the application itself, as you will see here.

    Managing Devices: Managing devices is a factor that is going to affect how much your company will have to spend outside of developing the applications. How will you track the devices within the corporation? How often will you have to replace phones and, as a consequence, upgrade your applications to support new phones? The devices can represent a significant investment of capital. If these questions are not addressed, you will find a number of hidden costs throughout the life of your solution.

    Purchase or BYOD: We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise. How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone or Windows Phone? The issue is whether you can afford to support everyone's choice of device. That is a lot to take on, even if you only support the current release of each platform. Do you go with the most popular device, or do you pick a platform that best matches your current ecosystem and distribute company-owned devices? There is no easy answer here, but you should be able to put a dollar value on both the hardware and the development costs related to platform coverage.

    Asset Tracking/Insurance: Smartphones are devices that are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset-management concerns, but also the assignment of financial responsibility. You will also need to insure them against damage and theft, and add legal documents that spell out the responsibilities of the employees who use these devices.

    Personal vs. Corporate Data: What happens when you terminate an employee? How do you recover the device? What happens when they have put personal data on the device? These are all situations that can cause possible loss of corporate intellectual property, or legal repercussions when reclaiming a device with personal data on it. Policies need to be put in place that protect the company from exposure to this type of loss. This can mean significant legal and procedural costs that you need to consider.

    Coming Up: In the last installment of this series I will cover application development considerations.

    Read the article

  • ExplorerCanvas and JQuery

    - by PhubarBaz
    I am working on a JavaScript app (CloudGraph) that uses the HTML5 canvas and jQuery, with ExplorerCanvas to support canvas in IE. I recently came across an interesting problem. What I was trying to do is restore the user's settings when the page is loaded: I read some information from a cookie that I set the last time the user accessed the application, and one of these settings is the size of the canvas.

    I decided that the best place to do this would be when the document is ready, using jQuery's $(document).ready(). This worked fine in browsers that natively support the canvas element, but in IE I kept getting errors the first time I hit the page. It seemed that the excanvas element wasn't initialized yet, because I was getting null-reference and unknown-property errors. If I refreshed the page, the errors went away, but the resized canvas wasn't drawing on its entire area; it was as if the clipping rectangle was still set to the default canvas size.

    I found that, when using excanvas, the canvas element has a div child element where the actual drawing takes place. When I changed the width and height of the canvas element in document.ready, it didn't change the width and height of the child div. Initially my solution was to also change the div element when changing the canvas element, and that worked. But then I realized that having to refresh the page every time I started the app in IE really sucked; that wouldn't be acceptable for users.

    Since it seemed like the canvas wasn't getting initialized before I was trying to use it, I decided to initialize my app at a different time. I figured the next best place was in the onload event, and sure enough, moving my initialization to onload fixed all of the problems. So it looks like the canvas shouldn't be manipulated until the onload event when using ExplorerCanvas. There might be ways to do it when the document is ready (I found some posts on initializing excanvas manually), but for me, waiting until onload worked just fine.
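
    A minimal sketch of the timing fix described above, assuming jQuery is loaded (initCanvasApp, the element id, and savedSettings are illustrative names, not from the post). DOM-ready fires before excanvas has built its drawing shim in IE, while window load fires after:

        // Too early in IE: excanvas hasn't built its shim at DOM-ready.
        // $(document).ready(initCanvasApp);

        // Late enough everywhere: the canvas is fully initialized at window load.
        $(window).on("load", initCanvasApp);

        function initCanvasApp() {
            var canvas = document.getElementById("graph");  // assumed element id
            canvas.width = savedSettings.width;             // restored from cookie
            canvas.height = savedSettings.height;
            canvas.getContext("2d").fillRect(0, 0, 10, 10); // safe to draw now
        }

    The trade-off is that window load also waits for images and stylesheets, so the app starts a little later in every browser, not just IE.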

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile, with fortnightly sprints, and while the application turned out well, we're now turning our eyes to some of the maintenance tasks.

    One maintenance task is that all of our documentation appears in the form of specs. These specs each describe one or more stories, and generally form a body of work which a few devs could knock over in a week. For development, that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles: one could describe a standard function, another could describe parts of a workflow, another a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document covering a particular business rule or function is quite hard, because you don't know which document has that information, and making a change request is equally hard, because once again we are unsure which spec to change.

    We have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, and in this time I'd like to revisit our processes. I think the way we have worked so far, delivering fortnightly specs, works well, but we also need a way to manage our documentation so that our business rules for a given function or workflow are easy to locate and change. I have two ideas.

    One is to compile all of our specs into a series of master specs, broken into a few broad functional areas: the specs describe the sprint, the master specs describe the system. The problems I can see are 1) our existing 120 specs are not all neatly divided into broad functional areas (some will require breaking up, merging, etc., which will take a lot of time), and 2) we'd be writing specs and updating master specs in each new sprint, which seems like double the work; and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big of a mess and manage that mess going forward: go through each spec, assign keywords to it, and then, when we want to find a function, search by that keyword. The problem I can see is 1) the business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user, such that they can pick and choose what they want (say, e.g., the dimensions Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it return an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (fact tables, dimensions, star-schema format).

    My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiars (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP feels to me like very old technology; is performance really an issue anymore?

    The alternative I can think of is dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file or database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes?

    In general, I feel that OLAP is at the same time too powerful (it supports drill up/down and complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternatives, such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling the requirements, both functional and non-functional (e.g., scalability; some fact tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months minimum, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
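
    A hedged sketch of the dynamic-SQL alternative described above, in Java (the metadata records and the FACT_/DIM_ names are invented for illustration; a real version would read them from the config table the poster mentions):

        import java.util.List;
        import java.util.stream.Collectors;

        // Minimal metadata: which dimension tables join in, and which metric to aggregate.
        record Dimension(String table, String joinKey, String label) {}
        record Metric(String column, String aggregate) {}

        class AdHocQueryBuilder {
            String build(String factTable, List<Dimension> dims, Metric metric) {
                String selectDims = dims.stream()
                    .map(d -> d.table() + "." + d.label())
                    .collect(Collectors.joining(", "));
                String joins = dims.stream()
                    .map(d -> " JOIN " + d.table() + " ON " + factTable + "."
                              + d.joinKey() + " = " + d.table() + "." + d.joinKey())
                    .collect(Collectors.joining());
                return "SELECT " + selectDims + ", "
                     + metric.aggregate() + "(" + factTable + "." + metric.column() + ")"
                     + " FROM " + factTable + joins
                     + " GROUP BY " + selectDims;
            }
        }

        // e.g. build("FACT_SALES",
        //            List.of(new Dimension("DIM_STORE", "store_id", "store_name"),
        //                    new Dimension("DIM_DATE", "date_id", "week")),
        //            new Metric("sales_amount", "SUM"))
        // -> SELECT DIM_STORE.store_name, DIM_DATE.week, SUM(FACT_SALES.sales_amount)
        //    FROM FACT_SALES JOIN DIM_STORE ON ... JOIN DIM_DATE ON ...
        //    GROUP BY DIM_STORE.store_name, DIM_DATE.week

    Since every identifier comes from the metadata registry rather than user input, the string concatenation stays safe; the database then handles the aggregation, which is where star schemas with millions of fact rows usually hold up well if the join keys are indexed.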

    Read the article

  • git commit -m “CodePlex now supports Git!”

    Finally, yes, CodePlex now supports Git! Git has been one of the top-rated requests from the CodePlex community for some time. Admittedly, when we launched CodePlex, we never expected that at some point we would be running a source control system originally invented by Linus Torvalds for the Linux kernel. Though I would also say, nobody would have thought the open source ecosystem would be as important to Microsoft as it has become now. Giving CodePlex users what they ask for and supporting their open source efforts has always been important to us, and we have a long list of improvements planned, so stay tuned, as we have more up our sleeves!

    Why Git? So why Git? CodePlex already has Mercurial for distributed version control and TFS (which also supports Subversion clients) for centralized version control. The short answer is that the CodePlex community voted, loud and clear, that Git support was critical. Additionally, we just like it: we use Git on our team every day, and making the DVCS workflows more available to the CodePlex community is just the right thing to do.

    Forks and Pull Requests: One of the capabilities that distributed version control systems such as Mercurial and Git enable is the fork and pull request workflow. Just like with Mercurial, projects configured to use Git allow forking the source and submitting contributions back via pull requests. The fork/pull request workflow is a key accelerator for many open source projects, and you will see improvements in our support coming later this year.

    More Choice: With the addition of Git, CodePlex now has three options when it comes to open source project hosting. Projects can now select between TFS, Mercurial, and Git. Each developer has their own preferences, and for some, centralized version control makes more sense; for others, DVCS is the only way to go. We're equally committed to supporting both of these technologies for our users. You can get started today by creating a new project, or contribute to an existing project by creating a fork. For help on getting started with Git on CodePlex, see our help documentation here. If you would like to switch your project to use Git, please contact us at CodePlex Support with your project information, and we will be happy to help you out.

    We're Listening: CodePlex is your community, and we want to deliver the experiences you need to have a successful open source project. We want your ideas and feedback to make CodePlex a great development community. The issue tracker on CodePlex is publicly available; add suggestions or vote up existing ones. And you can always find us on Twitter (I'm @mgroves84); follow us to keep up to date with our latest releases.

    Read the article

  • Fusion Middleware Newsletter - October Edition is Now Out

    - by Tanu Sood
    From the latest Oracle Fusion Middleware product releases, to the Oracle AppAdvantage use case on cloud and on-premise integration, to the very latest in the Developer corner and more, the October edition of the Oracle Fusion Middleware newsletter is chock-full of information. Catch the latest edition to learn about the highlights of the latest releases of Oracle GoldenGate 12c, Oracle Data Integrator 12c, and Oracle WebCenter. While there, don't miss the latest news and upcoming events for Oracle Fusion Middleware and developers. Find out who is in the Team Spotlight this edition, and watch the latest customer success stories across the portfolio.

    Did we miss anything? Would you like to hear more about a particular topic? Let us know: simply drop us a comment and we'll be sure to discuss it in our next editorial meeting. In the meantime, grab a coffee and enjoy the October edition of the newsletter.

    Read the article

  • I have a performance problem

    - by Alan
    (Copied from my WordPress blog.) So start 95% of the performance calls that I receive. They usually continue something like: "I have gathered some *stat data for you (e.g. the guds tool from Document 1285485.1), can you please root-cause our problem?"

    So, do you think you could? Neither can I. Based on this, my answer inevitably has to be "no". Given this kind of problem statement, I have no idea about the expectations, the boundary conditions, or even the application. The answer may as well be "Performance problems? Consult your local doctor for Viagra." It's really not a lot to go on.

    So, what kind of problem description is going to allow me to start work on the issue that is being seen? I don't doubt that there really is an issue; it just needs to be pinned down somewhat. What behavior exactly are you expecting to see? Be specific and use business metrics, for example "run time", "response time" and "throughput"; this helps us define exit criteria. Now, let's look at the system that is having problems: how is what you are seeing different? Use the same type of metrics. The answers to these two questions take us a long way towards being able to work a call. Even more helpful are answers to questions like:

    - Has this system ever worked to expectation? If so, when did it start exhibiting this behavior?
    - Is the problem always present, or does it sometimes work to expectation? If it sometimes works to expectation, when are you seeing the problem? Is there any discernible pattern?
    - Is the impact of the problem getting better, worse, or remaining constant?
    - What kind of differences are there between when the system was performing to expectation and when it is not?
    - Are there other machines where we could expect to see the same issue (e.g. similar usage and load) but do not? Again, what are the differences?

    Once we start to gather information like this, we start to build up a much clearer picture of exactly what we need to investigate, and what we need to achieve so that both you and I agree the problem has been solved. Please help get that figure of poorly defined problem statements down from its current 95% value.

    Read the article

  • New laptop, Windows 8.1, attempting dual install. Ubuntu installer doesn't 'see' existing OS

    - by Flaminica
    Though I've used Ubuntu for a few years, I'm new to installation; previously I had help, and now I'm doing it alone (I moved across the world). Windows 8.1 came preinstalled on my new laptop (Toshiba Satellite C70-A-17C - Core i5, 8 GB RAM, 750 GB HDD), and I am attempting a dual install with Ubuntu 14.04.

    I have already followed a few steps I found online to prepare: I backed up Windows, created a bootable Ubuntu USB and DVD (just in case one didn't work), turned off fast boot and secure boot, and shrank C:. The new unallocated portion of the drive is 292.97 GB. After shrinking C:, I restarted Windows a couple of times to make sure everything was working fine (it is). I then attempted to install from the Ubuntu live USB. However, the Ubuntu installer doesn't see that Windows 8.1 is already installed. I don't understand this, and I don't want to mess with Ubuntu partitioning when I don't know where the partitions will be created. My concern is that, if I go further with the installation process, Windows might be overwritten or compromised in some way.

    I then tried to reboot using the Ubuntu live DVD, thinking I might get a different result. However, I can't figure out how to make the laptop boot from the CD drive; I went into the BIOS and found no option there either. Any help is very appreciated!

    EDIT: It looks like I can't link directly to each photo, so here is my album of screenshots: http://imgur.com/a/zChCo. In order: (1) there's no option to boot from the CD drive, only USB; (2) everything looks okay so far; (3) I don't understand this, as Ubuntu has not yet been installed; (4) unmounting partitions? (I chose 'no'); (5) even though the laptop came preinstalled with Windows 8.1, the Ubuntu USB installer can't see it, so I chose 'something else'; (6) I need to pick and format partitions (I scrolled down and took a second shot to include all information). Completely lost, and I cancelled the installation.

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try to explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet", but the hollow expressions on my kids' faces scream for the instant relief of the latest video game. Never one to give up easily, I have formulated a new example: the post office.

    Similar to the post office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5-digit ZIP code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple. IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the Internet to grow from a single computer in the basement of a university or on your parents' kitchen table, to supporting the multitude of smartphones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information.

    Unfortunately, there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever-increasing rate. Few people could have predicted the explosive growth of the Internet or the shortage of IPv4 addresses we now face, but there is a "Plan B", and that is the vastly larger address space of IPv6. Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers, and reach new customers online, whose only option is to adopt IPv6 once IPv4 is depleted? Perhaps a first step is publishing a blog that is also accessible via IPv6; it's just a few extra bits.

    Join us for the Oracle OpenWorld 2012 session "Navigating IPv6 @ Oracle", Thursday, Oct. 4th, 2:15PM - 3:15PM, Palace Hotel - Concert. Learn more about IPv6 technologies at Oracle.

    Read the article

  • No Sound in 12.04 on Acer TimelineX 3830TG

    - by Fawkes5
    When I first installed Ubuntu 12.04, everything worked fine. All of a sudden my sound has stopped working! This has happened before (like 2 days ago); however, the next day it started working again. That may happen again (I've already rebooted 5 times). My sound card is an HDA Intel PCH, my sound chip is an Intel CougarPoint HDMI, and my machine is an Acer TimelineX 3830TG. Here's my lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b4)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)
        02:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
        03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)
        04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
        05:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)

    I'm running a dual boot with Windows 7, and sound always works there. I've tried loading earlier kernels (3.2.0), but that hasn't helped. I've tried playing with alsamixer, unmuting and raising the volume: doesn't work. I've tried playing with pavucontrol: doesn't work. Although, strangely, when I play a song I can see a bar moving under output, implying that something is working; the sound just isn't getting through my speakers. Thus it might be a hardware problem. I've tried uninstalling and reinstalling PulseAudio and alsamixer: doesn't work (maybe made it worse...). I've heard reverting back to kernel 3.0 works, but I'm reluctant to try that (I don't know much about kernels). Thank you. Please tell me what other information you need.

    EDIT: My audio started working again after another reboot. However, this time I was not connected to my external audio on startup; I will check whether that is the issue next time I'm at home.

    EDIT: Connecting to external audio is not the problem.

    Read the article
