Search Results

Search found 11365 results on 455 pages for 'authorization basic'.


  • Perl syntax error [closed]

    - by Linny
    I am a beginner taking a Perl programming course. We are trying to write a basic program for counting nucleotides in a DNA string. I'm getting syntax errors on the lines that have a single bracket (lines 28 & 70) and don't know why. It also reports compilation errors, and I have no idea where to start figuring those out.

        # The purpose of this program is to count the number of nucleotides in a strand.
        # Each protein is counted separately
        #
        print "/n NOTE: Nucleotide counting /n";
        #
        use strict;    # enforce variable declarations
        use warnings;  # enable compiler warnings
        # Display number of A,a,T,t,G,g,C,c, nucleotides in a word or sequence of letters.
        #
        my ($base) = '';             # an extracted letter from a string
        my ($nuceotide_count) = 0 ;  # the current position within the word
        my ($position) = 0 ;         # number of vowels in user-supplied word
        my ($word) = '';             # word to be processed
        my ($A_count) = 0 ;  # of A nucleotides in the user-supplied sequence
        my ($a_count) = 0 ;  # of A nucleotides in the user-supplied sequence
        my ($C_count) = 0 ;  # of C nucleotides in the user-supplied sequence
        my ($c_count) = 0 ;  # of C nucleotides in the user-supplied sequence
        my ($G_count) = 0 ;  # of G nucleotides in the user-supplied sequence
        my ($g_count) = 0 ;  # of G nucleotides in the user-supplied sequence
        my ($T_count) = 0 ;  # of T nucleotides in the user-supplied sequence
        my ($t_count) = 0 ;  # of T nucleotides in the user-supplied sequence
        word = (STDIN)
        for ($position = 0);($position
        if (($base eq 'a') or ($base eq 'A')) { ++$A_count; } # end if
        ++$position;
        if (($base eq 'T') or ($base eq 't')) { ++$T_count; } end if
        ++$position;
        if (($base eq 'G') or ($base eq 'g')) { ++$G_count; } # end if
        ++$position;
        if (($base eq 'C') or ($base eq 'c')) { ++$C_count; } # end if
        ++$position;
        } # end for
        # Display final results.
        #
        print " \n The number of A or a neucleotides is: $A_count";
        print " \n The number of T or t neucleotides is: $T_count";
        print " \n The number of G or g neucleotides is: $G_count";
        print " \n The number of C or c neucleotides is: $C_count";
        print " \n\n Program completed successfully. \n" ;
        exit ;
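    For comparison, here is a minimal corrected sketch of what the loop appears to be aiming for. The for-loop header and the substr extraction are assumptions, since that part of the post is garbled; variable names are kept from the original:

        use strict;
        use warnings;

        print "\n NOTE: Nucleotide counting \n";

        my $word = <STDIN>;   # read the sequence from standard input
        chomp $word;

        my ($A_count, $T_count, $G_count, $C_count) = (0, 0, 0, 0);

        for (my $position = 0; $position < length($word); ++$position) {
            my $base = substr($word, $position, 1);   # extract one letter
            if    (($base eq 'A') or ($base eq 'a')) { ++$A_count; }
            elsif (($base eq 'T') or ($base eq 't')) { ++$T_count; }
            elsif (($base eq 'G') or ($base eq 'g')) { ++$G_count; }
            elsif (($base eq 'C') or ($base eq 'c')) { ++$C_count; }
        }

        print " \n The number of A or a nucleotides is: $A_count";
        print " \n The number of T or t nucleotides is: $T_count";
        print " \n The number of G or g nucleotides is: $G_count";
        print " \n The number of C or c nucleotides is: $C_count";
        print " \n\n Program completed successfully. \n";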


  • SharePoint 2010 Hosting :: How to Enable Office Web Apps on SharePoint 2010

    - by mbridge
    Office Web Apps is the online version of Microsoft Office 2010. It is very helpful if you are going to use SharePoint 2010 in your organization, as it allows basic editing of Word documents without installing the Office suite on the client machine. Prerequisites: - Microsoft Windows Server 2008 R2 - Microsoft SharePoint Server 2010 or Microsoft SharePoint Foundation 2010 - Microsoft Office Web Apps. If you have installed all the above products, just follow these steps: 1. Go to Central Administration > click on Manage Service Applications. 2. All the menus are now displayed in the ribbon menu format first introduced in Office 2007. Click on New > Word Viewing Services (you can choose PowerPoint or Excel also; the steps are the same). This will open a pop-up window. 3. Give it a proper name, which can include your company or project name. 4. Under Application Pool select: SharePoint Web Services Default. 5. Next, keep the check box checked which says: Add this service application's proxy to the farm's default proxy list. Click OK. 6. This will install all the Office Web Apps services required. You can see the name you gave in the step above. How do you activate Office Web Apps in a site collection? 1. Go to the site for which you want to activate this feature. 2. Click on Site Actions > Site Settings > Site Collection Administration > Site Collection Features. 3. Activate Office Web Apps. How do you make sure Office Web Apps is working for your site collection? 1. Locate any Office document you have and click on the smart menu which appears when you hover your mouse over it. Don't double-click, as this will launch the document in the Office client if it is installed (this behavior can be changed). 2. If you see View in Browser or Edit in Browser as a menu item, your Office Web Apps is configured correctly. Other posts related to SharePoint 2010: 1. How to Configure SharePoint Foundation 2010 for SharePoint Workspace 2010 2. Integrating SharePoint 2010 and SQL 2008 R2
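    If you prefer to script the site-collection activation step, a rough SharePoint 2010 Management Shell sketch follows. The feature identity passed to Enable-SPFeature is an assumption here; list the farm's features first to confirm the name on your installation:

        # Sketch: activate the Office Web Apps feature on a site collection.
        # The feature identity below is an assumption -- confirm it with Get-SPFeature.
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        Get-SPFeature | Where-Object { $_.DisplayName -like "*OfficeWebApps*" }
        Enable-SPFeature -Identity "OfficeWebApps" -Url "http://sharepoint/sites/yoursite"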


  • How to detect which edges of a rectangle touch when they collide in iOS

    - by Mike King
    I'm creating a basic "game" in iOS 4.1. The premise is simple: there is a green rectangle ("disk") that moves/bounces around the screen, and a red rectangle ("bump") that is stationary. The user can move the red "bump" by touching another coordinate on the screen, but that's irrelevant to this question. Each rectangle is a UIImageView (I will replace them with some kind of image/icon once I get the mechanics down). I've gotten as far as detecting when the rectangles collide, and I'm able to reverse the direction of the green "disk" on the Y axis if they do. This works well when the green "disk" approaches the red "bump" from the top or bottom; it bounces off in the other direction. But when it approaches from the side, the bounce is incorrect; I need to reverse the X direction instead. Here's the timer I set up:

        - (void)viewDidLoad {
            xSpeed = 3;
            ySpeed = -3;
            gameTimer = [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(mainGameLoop:) userInfo:nil repeats:YES];
            [super viewDidLoad];
        }

    Here's the main game loop:

        - (void) mainGameLoop:(NSTimer *)theTimer {
            disk.center = CGPointMake(disk.center.x + xSpeed, disk.center.y + ySpeed);
            // make sure the disk does not travel off the edges of the screen
            // magic number values based on size of disk's frame
            // startAnimating causes the image to "pulse"
            if (disk.center.x < 55 || disk.center.x > 265) {
                xSpeed = xSpeed * -1;
                [disk startAnimating];
            }
            if (disk.center.y < 55 || disk.center.y > 360) {
                ySpeed = ySpeed * -1;
                [disk startAnimating];
            }
            // check to see if the disk collides with the bump
            if (CGRectIntersectsRect(disk.frame, bump.frame)) {
                NSLog(@"Collision detected...");
                if (! [disk isAnimating]) {
                    ySpeed = ySpeed * -1;
                    [disk startAnimating];
                }
            }
        }

    So my question is: how can I detect whether I need to flip the X speed or the Y speed? i.e., how can I calculate which edge of the bump was collided with?
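    One common approach, sketched below under the assumption that plain axis-aligned frames are enough here: intersect the two frames and compare the overlap's width and height; the axis with the shallower penetration is the side the disk came in from.

        // Sketch: decide which axis to flip by comparing penetration depths.
        // CGRectIntersection returns the overlapping rect of the two frames.
        CGRect overlap = CGRectIntersection(disk.frame, bump.frame);
        if (!CGRectIsNull(overlap)) {
            if (overlap.size.width < overlap.size.height) {
                xSpeed = xSpeed * -1;   // shallow horizontal overlap: side hit
            } else {
                ySpeed = ySpeed * -1;   // shallow vertical overlap: top/bottom hit
            }
            [disk startAnimating];
        }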


  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A. Mike Chandler, Qualcomm Q: Did you run into any issues when integrating all of the different applications together? A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications. Q: What has been the biggest benefit your end users have seen? A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that? A: Within a large company, I’m sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution. Q: Given the performance, what do you estimate to be the top-end capacity of the system? A: I think our top-end capacity is really only limited by our hardware. I’m comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the system vs. DB connections, without impacting performance. Q: What's the overview or summary of feedback from the users interacting with the site? A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment, the new LAF, and the performance of the site.
    Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large though, customers are happy and the business is happy. Q: Can you describe the impressions about the site before and after the project within Qualcomm? A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn’t it work the first time? From a UI perspective, we’d hear comments like it looked like it was built by a high school student. Vince Casarez & Gourav Goyal, Keste Q: Did you run into any obstacles when implementing the solution? A: It's interesting: some people call them "obstacles"; on this project we just called them "dependencies". There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end-user experience. There was also a set of dependencies on the user acceptance testing to make sure that everyone understood the use cases for how the system would be used. With branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler. But with all the work up front on the user design, and with the business driving this set of experiences, the downstream suggestions that tend to distract a team were minimized. In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum. Q: Was there a lot of custom work that needed to be done for this particular solution? A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically, for the process flow, we had to build those. The framework and tooling we used made things easier so we didn’t have to implement core functionality, like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time. Q: What do you think were the keys to success for rolling out WebCenter? A: The 5 main keys to success were: 1) sponsorship from the whole organization around this project, from senior executive agreement to business owners driving functionality and IT development alignment; 2) upfront design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and 5) constant prioritization by the business of the issues for development to fix.
    It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!


  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as on every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files. The classes work something like this:

        class Foo {
            public bool LoadFoo() {
                bool blnResult = false;
                if (this.FooID == 0) {
                    throw new Exception("FooID must be set before calling this method.");
                }
                DataSet ds = // ... call to Sproc
                if (ds.Tables[0].Rows.Count > 0) {
                    foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                    // other properties set
                    blnResult = true;
                }
                return blnResult;
            }
        }

        // Consumer
        Foo foo = new Foo();
        foo.FooID = 1234;
        foo.LoadFoo();
        // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application. Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because "it works", so we cannot change the existing classes to use an ORM. But I don't know how often we add brand new modules instead of adding to/fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of are having cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones would still be using the DataSets) and less hassle in having to remember which procedure scripts have been run and which still need to be run, etc., but that's it, and I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.
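    For the pitch itself, a concrete before/after may help. Here is a rough sketch of the same load written against an Entity Framework-style ORM; the FooContext class and the entity mapping are assumptions, purely for illustration:

        using System;
        using System.Linq;

        class FooLoader
        {
            // Sketch: LoadFoo() replaced by a typed query. No DataSet, and no
            // sproc script to remember to run -- the mapping lives with the model.
            static void Main()
            {
                using (var db = new FooContext())   // FooContext: assumed EF context
                {
                    var foo = db.Foos.SingleOrDefault(f => f.FooID == 1234);
                    if (foo != null)
                    {
                        Console.WriteLine(foo.FooName);   // typed property, not ds.Tables[0]...
                    }
                }
            }
        }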


  • Where to put the bootloader when running multiple drives and partitions

    - by Matt G
    I have Win8 on my desktop, where a 120GB SSD is used to run Windows and some select applications, while I have a 2TB HDD to provide basic file storage and, where possible, to install applications on instead of the SSD. I want to install Ubuntu on a new partition of the HDD (I allocated 300GB, with a 5GB swap file). I used a USB stick to install the OS, which seemed to have done the job. However, after prompting for a restart, I can no longer boot to Ubuntu. During installation I was confused about where to point the "boot loader installation". I ended up selecting "/dev/sdb" because I figured I would be able to boot via the BIOS by selecting the HDD drive as a priority over the SSD. The bootloader is a large part of where I think I went wrong. My partition system looked something like this:

        /dev/sda  ...                     //SSD ~120 GB
        /dev/sda1  NTFS (350 MB)          //Win8System
        /dev/sda2  NTFS (118 GB)          //Win8C-Drive
        /dev/sdb  ...                     //HDD ~2TB
        /dev/sdb1  NTFS (1563 GB)         //FileStorage
        /dev/sdb5  Free Space (300 GB)    //Space I want to use for Linux
        (NOTE: Created two partitions from the 300GB, ~5GB and 295GB: sdb5, sdb6.)

    It'd be great if I could get an explanation of which drive you'd select for the boot loader and why, and which selections won't work with regard to the boot loader installation. I think I understand what GRUB is, but I have no idea how to use it or play around with it. I seem to be able to get back into the OS from my USB stick; however, I believe it's just showing me a preview/trial of Ubuntu (i.e. it can't access any of the system NTFS drives). Note: if I try to install from the USB again, it will recognize that a version of Ubuntu 13.10 exists on the system. Apologies in advance; I have used Windows all my life and don't really know much about Linux at all. I did have a brief skim over some similar questions but didn't find anything too useful: - Where to install bootloader when installing Ubuntu as secondary OS? - ubuntu 12.10 dual boot with windows 8 on two hdds - Dual-boot Windows 7 and Ubuntu on two SSDs with UEFI
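    As a rough sketch of the usual repair path, assuming the drive letters above (/dev/sdb for the HDD, with the Ubuntu root on /dev/sdb6), GRUB can be reinstalled from the live USB session like this:

        # Sketch: reinstall GRUB to the HDD's MBR from an Ubuntu live session.
        # Device names are assumptions -- check yours with: sudo fdisk -l
        sudo mount /dev/sdb6 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdb
        # after rebooting into the installed system, regenerate the menu:
        sudo update-grub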


  • Unable to connect to mail server via IMAP and roundcube

    - by mrhatter
    I am having trouble getting the final parts of my mail server up and working. I followed this tutorial to get everything set up on the mail server side. I have installed Roundcube for webmail and configured it, but it says "error connecting, connection refused" when attempting to connect via IMAP. This is through the "test imap" section of its installer. Also, it is giving me an error message about permissions for its log and temp folders, but that's not as important as actually getting mail to work. I have also tried connecting to the mail server using Thunderbird; however, it cannot establish a connection either, and I know my login information is correct. I know that the databases are working correctly based on the Roundcube installer telling me that they have been "successfully initialized". Here are my firewall rules:

        -A INPUT -i lo -j ACCEPT
        -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 487 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 993 -j ACCEPT
        -A INPUT -j DROP

    Which I set up in iptables. I have modified them from what I used in this tutorial. I'm not sure what to try next. Any help would be wonderful! I am using Ubuntu 14.04 Server, Apache 2.4.7, Roundcube 1.0.1, and the latest versions of Dovecot and Postfix. The email databases are contained in MySQL. I am running this on a VPS server. UPDATE: I have changed from iptables to ufw. I have run the following commands to set up a basic firewall with ufw:

        ufw default deny
        ufw allow ssh
        ufw allow http
        ufw allow https
        ufw allow imap
        ufw allow imaps
        ufw allow smtp

    I then used telnet to check all of the mail ports. But port 993 isn't working, even though ufw says both 993 and 993/tcp are open. What am I missing?
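    With the firewall now open, the usual next step is to confirm that Dovecot itself is listening on 993 at all. A quick diagnostic sketch (the log path is the standard Ubuntu one; adjust if yours differs):

        # Sketch: is anything actually bound to the IMAP ports?
        sudo netstat -tlnp | grep -E ':(143|993)'   # or: sudo ss -tlnp
        # Dump the running Dovecot config and check protocols/SSL are enabled:
        doveconf -n | grep -E 'protocols|ssl'
        # Watch the mail log while Roundcube or Thunderbird tries to connect:
        sudo tail -f /var/log/mail.log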


  • efficient collision detection - tile based html5/javascript game

    - by Tom Burman
    I'm building a basic RPG game and am onto collisions/pickups etc. now. It's tile-based, and I'm using HTML5 and JavaScript. I use a 2D array to create my tile map. I'm currently using a switch statement on whatever key has been pressed to move the player. Inside the switch statement I have if statements to stop the player going off the edge of the map and viewport, and also, if the player is about to land on a tile with tileID 3, the player stops. Here is the statement:

        canvas.addEventListener('keydown', function(e) {
            console.log(e);
            var key = null;
            switch (e.which) {
                case 37: // Left
                    if (playerX > 0) { playerX--; }
                    if (board[playerX][playerY] == 3) { playerX++; }
                    break;
                case 38: // Up
                    if (playerY > 0) playerY--;
                    if (board[playerX][playerY] == 3) { playerY++; }
                    break;
                case 39: // Right
                    if (playerX < worldWidth) { playerX++; }
                    if (board[playerX][playerY] == 3) { playerX--; }
                    break;
                case 40: // Down
                    if (playerY < worldHeight) playerY++;
                    if (board[playerX][playerY] == 3) { playerY--; }
                    break;
            }
            viewX = playerX - Math.floor(0.5 * viewWidth);
            if (viewX < 0) viewX = 0;
            if (viewX + viewWidth > worldWidth) viewX = worldWidth - viewWidth;
            viewY = playerY - Math.floor(0.5 * viewHeight);
            if (viewY < 0) viewY = 0;
            if (viewY + viewHeight > worldHeight) viewY = worldHeight - viewHeight;
        }, false);

    My question is: is there a more efficient way of handling collisions than loads of if statements for each key? The reason I ask is because I plan on having many items that the player will need to be able to pick up or not walk through, like walls, cliffs, etc. Thanks for your time and help. Tom
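    One common restructuring, sketched here against the same board/playerX/playerY variables: table-drive the movement so there is a single walkability check, then grow that check (a set of blocked tile ids, a pickup handler, and so on) instead of adding more branches per key.

        // Sketch: data-driven movement -- one walkability check instead of
        // per-key ifs. Assumes the same board and tile id 3 = blocked.
        var moves = { 37: [-1, 0], 38: [0, -1], 39: [1, 0], 40: [0, 1] };

        canvas.addEventListener('keydown', function (e) {
            var move = moves[e.which];
            if (!move) return;
            var newX = playerX + move[0];
            var newY = playerY + move[1];
            // stay on the map and off blocked tiles
            if (newX >= 0 && newX <= worldWidth &&
                newY >= 0 && newY <= worldHeight &&
                board[newX][newY] !== 3) {
                playerX = newX;
                playerY = newY;
            }
        }, false);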


  • How to execute a Ruby file in Java, capable of calling functions from the Java program and receiving primitive-type results?

    - by Omega
    I do not fully understand what I am asking (lol!), well, in the sense of whether it is even possible, that is. If it isn't, sorry. Suppose I have a Java program. It has a Main and a JavaCalculator class. JavaCalculator has some basic functions like:

        public int sum(int a, int b) {
            return a + b;
        }

    Now suppose I have a Ruby file called MyProgram.rb. MyProgram.rb may contain anything you could expect from a Ruby program. Let us assume it contains the following:

        class RubyMain
            def initialize
                print "The sum of 5 with 3 is #{sum(5,3)}"
            end
            def sum(a,b)
                # <---------- Something will happen here
            end
        end
        rubyMain = RubyMain.new

    Good. Now then, you might already suspect what I want to do: I want to run my Java program. I want it to execute the Ruby file MyProgram.rb. When the Ruby program executes, it will create an instance of JavaCalculator, execute the sum function it has, get the value, and then print it. The Ruby file has been executed successfully. The Java program closes. Note: the "create an instance of JavaCalculator" part is not entirely necessary. I would be satisfied with just running a sum function from, say, the Main class. My question: is such a thing possible? Can I run a Java program which internally executes a Ruby file which is capable of commanding the Java program to do certain things and getting results? In the above example, the Ruby file asks the Java program to do a sum for it and give the result. This may sound ridiculous. I am new to this kind of thing (if it is possible, that is). WHY AM I ASKING THIS? I have a Java program, which is some kind of game engine. However, my target audience is a bunch of Ruby coders. I don't want to have them learn Java at all. So I figured that perhaps the Java program could simply offer the functionality (capacity to create windows, display sprites, play sounds...) and then my audience can simply code the logic in Ruby, which basically just asks my Java engine to do things like displaying sprites or playing sounds. That's when I thought about asking this.
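    This is essentially what JRuby's embedding API ("Red Bridge") is for. A minimal sketch, assuming jruby.jar is on the classpath and reusing the class names from the question:

        import org.jruby.embed.PathType;
        import org.jruby.embed.ScriptingContainer;

        public class Main {
            public static void main(String[] args) {
                ScriptingContainer container = new ScriptingContainer();
                // Expose a Java object to the script as the Ruby global $calculator.
                container.put("$calculator", new JavaCalculator());
                // Inside MyProgram.rb, sum(a, b) can now delegate to
                // $calculator.sum(a, b) and receive a Java int back.
                container.runScriptlet(PathType.RELATIVE, "MyProgram.rb");
            }
        }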


  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by W3C until around 2022. One might have thought that it would take years yet to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit. The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way. Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 video and audio tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain from Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share. Why do we care? It is because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns; page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes. HTML5 is likely to provide base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities. Cheers, Laila
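    For illustration, the multi-column typography mentioned above takes only a couple of CSS3 rules; a sketch (the vendor-prefixed forms match what WebKit-based browsers of the day shipped, and the class name is illustrative):

        /* Sketch: reflow a container's text across three columns. */
        .article {
            -webkit-column-count: 3;
            -webkit-column-gap: 2em;
            column-count: 3;
            column-gap: 2em;
        }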


  • ArchBeat Facebook Friday: Top 10 Shared Links - May 23-29, 2014

    - by OTN ArchBeat
    Among the 5,144 fans of the OTN ArchBeat Facebook Page, the following Top 10 items were the most popular over the last seven days, May 23-29, 2014. GlassFish/Java EE Community Open Forum Today! | Reza Rahman Have questions about GlassFish? Java EE/GlassFish evangelist Reza Rahman has answers, and you can pick his brain tomorrow during an online forum organized by the London GlassFish User Group and C2B2. The event is free, but you must register in order to participate. Click the link for more information. Twitter Tuesday - Top 10 @ArchBeat Tweets - May 20-26, 2014 The top 10 @OTNArchBeat tweets for the week of May 20-26, 2014. Topics covered include ADF, Cloud, GoldenGate, KScope14, OBIEE, ODI, WebLogic, WebCenter, and more. FrameworkFolders Support has come to Oracle WebCenter Portal | JayJay Zheng Interested in working with Framework Folders in Oracle WebCenter Portal? Oracle ACE JayJay Zheng reviews the essentials. Video: Programming Best Practices - ADF Business Components | Frank Nimphius Frank Nimphius discusses best practices and recommendations for ADF Business Components in the latest video from ADF Architecture TV. Video: Kscope 2014 Preview: Data Modeling and Moving Meditation with Kent Graziano For your mind and your body! Oracle ACE Director Kent Graziano previews his Kscope 2014 data modeling presentations and the early morning Chi Gung sessions he will once again lead for Kscope attendees. OAG and OES Integration for Web API Security: skin and guts | Andre Correa A-Team architect Andre Correa's post examines a strategy for web API security that uses OAG (Oracle API Gateway) and OES (Oracle Entitlements Server). Getting Started with Coherence*Web in WebLogic Server 12.1.2 | Tim Middleton Solution architect Tim Middleton shows you how to configure Coherence*Web in WebLogic Server 12.1.2 and deploy a basic web application. SOA and Business Processes: You are the Process! Part of the 13-part "Industrial SOA" article series, this article looks at best practices for modeling and managing effective business processes. Authentication in Oracle Identity Federation / IdP | Damien Carru Damien Carru discusses authentication when OIF acts as an IdP and how the server can be configured to use specific OAM Authentication Schemes to challenge the user. Caveats on Using WebLogic Server with JDK7 | JayJay Zheng Quick tech tips from Oracle ACE JayJay Zheng.


  • Writing or extending existing Emacs packages: is it worth it, or should I move to NetBeans/Eclipse?

    - by Andrea
    I'm finishing my master degree course in CS and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most or create new ones, but, in practice, I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly-featured/poorly-maintained packages with either overlapping functionalities or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well; there's CUPS for Lisp in Eclipse and LambdaBeans built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts; for example, Hibachi was archived last year. What's your opinion on this? Which editor should I write extensions for? EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient both for me and the computer I'm using. It's fast, and the textual interface (i.e. minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage. If I need to correct or remove something, I usually just have to change a row in my .emacs or an Elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that screwed up my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages. If I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have; in other words, that it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table update on file writing. There are a few projects on this that lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule can be applied. EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up Elisp programming, but the extensions I have in mind don't exist yet despite their relative simplicity and the effort more knowledgeable people poured into related projects. This makes me wonder whether it is so because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or its implementation, its extension language not being able to keep up with the growing complexity.
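    To make the wish concrete, the tag-table-on-save extension described above is only a few lines of Elisp. A minimal sketch, with the project root and the file pattern as assumptions:

        ;; Sketch: rebuild the project TAGS table automatically whenever a file
        ;; under the project root is saved. Paths and patterns are illustrative.
        (defvar my-project-root "~/src/my-project/")

        (defun my-update-tags-on-save ()
          "Regenerate TAGS after saving a file under `my-project-root'."
          (when (string-prefix-p (expand-file-name my-project-root)
                                 (or buffer-file-name ""))
            (start-process "etags" nil "sh" "-c"
                           (format "cd %s && find . -name '*.[ch]' | xargs etags"
                                   (expand-file-name my-project-root)))))

        (add-hook 'after-save-hook #'my-update-tags-on-save)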


  • Ubuntu hangs on booting up after an update

    - by alFReD NSH
    I made a clean install yesterday, restarted for the first time, and everything went well. Then, after I updated packages and copied my old home directory over the new one, it hung while booting when I restarted. I tried reinstalling again and doing the same thing, but again the same thing happened. Here's what I see when the Ubuntu logo with the five dots is shown. After that, 3 or 4 of the dots will load and it hangs there. If I press arrow up before that, this will be shown. I started my laptop again today (the pictures are from the day before) and after that, booted up with the live CD and got the logs. dmesg: http://pastebin.com/aVxV7BQF syslog: http://pastebin.com/4E2BrRUK And some info: alfred@alFitop:~$ uname -a Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux lshw: http://pastebin.com/AZbKJmsT sources.list: http://pastebin.com/2HazmuyV My problem is a bit similar to here: http://ubuntuforums.org/showthread.php?t=1918271 Though I didn't change my xorg config; I only changed the home directory and updated packages. I've tried memtest and fsck; both passed. In the recovery mode boot option, I've also realized that the same things happen in failsafe graphical mode. But when I go into network mode, I can boot up my system, though of course the graphics are just basic. Adding blacklist intel_ips to /etc/modprobe.d/blacklist.conf solves the first message, but I still get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25; I've tried booting up with 3.2.0-23 (the one the installer came with), but got the same results. Also uninstalled apparmor; didn't help. I've installed Ubuntu again, this time without copying the home directory: same result. --- UPDATE --- This problem was solved before by removing backports, but it's back again! I updated my laptop last night and the problem came back. It's definitely one of these packages. My /var/log/apt/term.log and /var/log/apt/history.log. I'm almost having the same situation. --- UPDATE --- I realized this has also happened at times when I had updated (and hadn't restarted afterwards) and my computer's power was cut off, shutting it down due to lack of power. And I realized that if I just do as I answered, but somewhere without the GUI (networking mode has the GUI), it wouldn't work.
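    Since removing backports fixed this before, here is a rough sketch of disabling that pocket again (standard Ubuntu paths; the sed call keeps a backup of the original file):

        # Sketch: comment out the backports lines, then refresh the package lists.
        sudo sed -i.bak '/backports/ s/^deb/#deb/' /etc/apt/sources.list
        sudo apt-get update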


  • Need help with xorg.conf for dual Radeon HD6450 video cards with 4 monitors

    - by Eriks Goodwin-Pfister
    I am running 64-bit Ubuntu 13.10 with Unity and have dual (2) Radeon HD6450 video cards and 4 Hanns-G HL273 monitors. Each Radeon card is driving one monitor via DVI and the other via VGA. I am running the proprietary video drivers from AMD's web site: "amd-catalyst-13.11-beta V9.4-linux-x86.x86_64.run". I tried to use "amd-catalyst-13.12-linux-x86.x86_64.run" but could not get that newer version to install. What I need help with is how to "correct" my xorg.conf file, plus any other needed instructions, to get all four of my monitors to work as a continuous desktop that allows me to drag things from one monitor to the next, etc. When I tried to use the default open source drivers that came with Ubuntu 13.10, only three of the monitors would work. Now that I am running the proprietary ones, all four monitors come on and I can move my mouse from one end to the other, but only the right-most monitor displays my desktop and allows me to "do anything". Any time I move my mouse to any of the other three monitors (which display all-white), it turns into an "X" and does not do anything else but move. Enabling Xinerama makes all four displays go all-black after login. I do have amdcccle installed, but it does not seem to have the ability to handle my particular configuration. My current xorg.conf:

        Section "ServerLayout"
            Identifier "Basic Layout"
            Screen 0 "Screen1" 5760 0
            Screen 1 "Screen0" 0 0
            Screen 2 "Screen2" 3840 0
            Screen 3 "Screen3" 1920 0
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier "0-DFP2"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "0-CRT1"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "1-DFP2"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "1-CRT1"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1080"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "fglrx"
            Option "Monitor-CRT1" "1-CRT1"
            BusID "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "fglrx"
            Option "Monitor-DFP2" "0-DFP2"
            BusID "PCI:4:0:0"
        EndSection

        Section "Device"
            Identifier "Device2"
            Driver "fglrx"
            Option "Monitor-DFP2" "1-DFP2"
            BusID "PCI:1:0:0"
            Screen 1
        EndSection

        Section "Device"
            Identifier "Device3"
            Driver "fglrx"
            Option "Monitor-CRT1" "0-CRT1"
            BusID "PCI:4:0:0"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen2"
            Device "Device2"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen3"
            Device "Device3"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection


  • Tell me a Story

    - by Geoff N. Hiten
    I recently had a friend ask me to review his resume.  He is a very experienced DBA with excellent skills.  If I had an opening I would have hired him myself.  But not because of the resume.  I know his skill set and skill levels, but there is no way his standard resume can convey that.  A bare bones list of job titles and skills does not set you apart from your competition, nor does it convey whether you have junior or senior level skills and experience.  The solution is to not use the standard format. Tell me a story.  I want to know what you were responsible for.  Describe a tough project and how you saved time/money/personnel on that project.  Link your work activity to business value.  Drop some technical bits in there since we do work in a technical field, but show me what you can do to add value to my business well above what I would pay you.  That will get my attention. The resume exists for one primary and one secondary reason.  The primary reason is to get the interview.  A Resume won’t get you a job, so don’t expect it to.  The secondary reason is to give you and the interviewer a starting point for conversations.  If I can say “Tell me more about when….” and reference an item from your resume, then that is great for both of us.  Of course, you better be able to tell me more, both from the technical and the business side, at least if I am hiring a senior or higher level position.  As for the junior DBAs, go ahead and tell your story too.  Don’t worry about how simple or basic your projects or solutions seem.  It is how you solved the problem and what you learned that I am looking for.  If you learn rapidly and think like a DBA, I can work with that, regardless of you current skill level.


  • Oracle GoldenGate: Knowledge Document Series Post #2

    - by Doug Reid
    For our second post in this series, the team would like to highlight the knowledge document "How-To: Oracle GoldenGate – Heartbeat Process to Monitor Lag and Performance". This knowledge document outlines a procedure to reliably measure lag between source and target systems through the use of 'heartbeat' tables. The basic idea is to have a table on the source system that gets updated at a predetermined interval. In your capture processes you would capture the update from the heartbeat table. Using tokens, you would add some additional information to the heartbeat record to be able to tell which extract process was capturing the update. This additional information would be used downstream to calculate the real lag time between the source and target systems for a given extract, and by checking the last update time on the heartbeat at the target you could also determine if data has stopped flowing between the source and target. Click here to view the document
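    In plain SQL, the idea reduces to something like the sketch below (table and column names are illustrative, not the ones from the knowledge document):

        -- Sketch: a heartbeat table on the source, updated on a fixed schedule.
        CREATE TABLE gg_heartbeat (
          site_id     VARCHAR2(30) PRIMARY KEY,
          last_update TIMESTAMP
        );

        -- Run this on the source at a predetermined interval (e.g. once a minute
        -- via a job); capture it with the extract, and compare timestamps on the
        -- target to compute real end-to-end lag.
        UPDATE gg_heartbeat
           SET last_update = SYSTIMESTAMP
         WHERE site_id = 'SOURCE_DB1';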


  • Is it ok to initialize an RB_ConstraintActor in PostBeginPlay?

    - by Almo
    I have a KActorSpawnable subclass that acts weird. In PostBeginPlay, I initialize an RB_ConstraintActor; the default is not to allow rotation. If I create one in the editor, it's fine and won't rotate. If I spawn one, it rotates. Here's the class:

        class QuadForceKActor extends KActorSpawnable
            placeable;

        var(Behavior) bool bConstrainRotation;
        var(Behavior) bool bConstrainX;
        var(Behavior) bool bConstrainY;
        var(Behavior) bool bConstrainZ;
        var RB_ConstraintActor PhysicsConstraintActor;

        simulated event PostBeginPlay()
        {
            Super.PostBeginPlay();
            PhysicsConstraintActor = Spawn(class'RB_ConstraintActorSpawnable', self, '', Location, rot(0, 0, 0));
            if(bConstrainRotation)
            {
                PhysicsConstraintActor.ConstraintSetup.bSwingLimited = true;
                PhysicsConstraintActor.ConstraintSetup.bTwistLimited = true;
            }
            SetLinearConstraints(bConstrainX, bConstrainY, bConstrainZ);
            PhysicsConstraintActor.InitConstraint(self, None);
        }

        function SetLinearConstraints(bool InConstrainX, bool InConstrainY, bool InConstrainZ)
        {
            if(InConstrainX) { PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 1; }
            else             { PhysicsConstraintActor.ConstraintSetup.LinearXSetup.bLimited = 0; }
            if(InConstrainY) { PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 1; }
            else             { PhysicsConstraintActor.ConstraintSetup.LinearYSetup.bLimited = 0; }
            if(InConstrainZ) { PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 1; }
            else             { PhysicsConstraintActor.ConstraintSetup.LinearZSetup.bLimited = 0; }
        }

        DefaultProperties
        {
            bConstrainRotation=true
            bConstrainX=false
            bConstrainY=false
            bConstrainZ=false
            bSafeBaseIfAsleep=false
            bNoEncroachCheck=false
        }

    Here's the code I use to spawn one. It's a subclass of the one above, but it doesn't reference the constraint at all:

        local QuadForceKCreateBlock BlockActor;
        BlockActor = spawn(class'QuadForceKCreateBlock', none, 'PowerCreate_Block', BlockLocation(), m_PreparedRotation, , false);
        BlockActor.SetDuration(m_BlockDuration);
        BlockActor.StaticMeshComponent.SetNotifyRigidBodyCollision(true);
        BlockActor.StaticMeshComponent.ScriptRigidBodyCollisionThreshold = 0.001;
        BlockActor.StaticMeshComponent.SetStaticMesh(m_ValidCreationBlock.StaticMesh);
        BlockActor.StaticMeshComponent.AddImpulse(m_InitialVelocity);

    I used to initialize an RB_ConstraintActor from outside, at the place where I spawned the block. This worked, which is why I'm pretty sure it has nothing to do with the other code in QuadForceKCreateBlock. I then added the internal constraint in QuadForceKActor for other purposes. When I realized I had two constraints on the CreateBlock doing the same thing, I removed the constraint code from the place where I spawn it. Then it started rotating. Is there a reason I should not be initializing an RB_ConstraintActor in PostBeginPlay? I feel like there's some basic thing about how the engine works that I'm missing.


  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 yrs) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP; you get the picture. The apps I develop could be just to pull back data and store/report it. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc. I've been developing in .NET, and I'm currently reading up on design patterns and trying to think about how I can improve my skills to make better use of, and increase my understanding of, OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit me later on when modifying my code. My classes are usually very specific, and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-interfacing layer, using static methods to proxy/validate between the front and the back end. My code, by the nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can make better use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be being smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP? EDIT: Here is some basic code to show how my manager and API-facing layers work. I use static classes, as they do not persist any data, only facilitate moving it between layers.

        public static class MgrClass {
            public static bool PowerOnVM(string VMName) {
                // Perform logic to validate or apply biz logic
                // call APIClass to do the work
                return APIClass.PowerOnVM(VMName);
            }
        }

        public static class APIClass {
            public static bool PowerOnVM(string VMName) {
                // Calls to 3rd party API to power on a virtual machine
                // returns true or false if it was successful, for example
            }
        }
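    One place an interface would pay off in exactly this layout is between MgrClass and APIClass: if the manager depends on an abstraction, the API layer can be swapped out, or faked in a unit test. A rough sketch, reusing the names from the post (interface and class names are illustrative):

        // Sketch: the same PowerOnVM flow, but the manager depends on an
        // interface instead of a static class, so a fake can stand in for tests.
        public interface IVmApi
        {
            bool PowerOnVM(string vmName);
        }

        public class VmwareApi : IVmApi
        {
            public bool PowerOnVM(string vmName)
            {
                // call the third-party API to power on a virtual machine
                return true; // placeholder result for the sketch
            }
        }

        public class VmManager
        {
            private readonly IVmApi api;
            public VmManager(IVmApi api) { this.api = api; }

            public bool PowerOnVM(string vmName)
            {
                // validation / business logic here, then delegate
                return api.PowerOnVM(vmName);
            }
        }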


  • Confusion about inheritance

    - by Samuel Adam
    I know I might get downvoted for this, but I'm really curious. I was taught that inheritance is a very powerful polymorphism tool, but I can't seem to use it well in real cases. So far, I can only use inheritance when the base class is an abstract class. Examples : If we're talking about Product and Inventory, I quickly assumed that a Product is an Inventory because a Product must be inventorized as well. But a problem occured when user wanted to sell their Inventory item. It just doesn't seem to be right to change an Inventory object to it's subtype (Product), it's almost like trying to convert a parent to it's child. Another case is Customer and Member. It is logical (at least for me) to think that a Member is a Customer with some more privileges. Same problem occurred when user wanted to upgrade an existing Customer to become a Member. A very trivial case is the Employee case. Where Manager, Clerk, etc can be derived from Employee. Still, the same upgrading issue. I tried to use composition instead for some cases, but I really wanted to know if I'm missing something for inheritance solution here. My composition solution for those cases : Create a reference of Inventory inside a Product. Here I'm making an assumption about that Product and Inventory is talking in a different context. While Product is in the context of sales (price, volume, discount, etc), Inventory is in the context of physical management (stock, movement, etc). Make a reference of Membership instead inside Customer class instead of previous inheritance solution. Therefor upgrading a Customer is only about instantiating the Customer's Membership property. This example is keep being taught in basic programming classes, but I think it's more proper to have those Manager, Clerk, etc derived from an abstract Role class and make it a property in Employee. I found it difficult to find an example of a concrete class deriving from another concrete class. Is there any inheritance solution in which I can solve those cases? Being new in this OOP thing, I really really need a guidance. Thanks!


  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspxI had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure, we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.


  • EBS – ATG Webcast 9/11 - 9/12

    - by cwarticki
    EBS – ATG Webcast in September 2012: EBS – Multiple Language Support (MLS). Agenda: EBS is MLS Ready; NLS / MLS Basic Architecture; NLS / MLS Installation; NLS / MLS Configuration Settings; Troubleshooting; Question and Answers. EMEA Session: September 11, 2012 at 09:00 UK / 10:00 CET / 13:30 India / 17:00 Japan / 18:00 Sydney (Australia). Details & Registration: Note 1480084.1. Direct link to register in WebEx. US Session: September 12, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1480085.1. Direct link to register in WebEx. Schedules, recordings and presentations of the Advisor Webcasts delivered under the EBS Applications Technology area can be found in Note 1186338.1. Current schedules of Advisor Webcasts for all Oracle products can be found in Note 740966.1. Post-presentation recordings of the Advisor Webcasts for all Oracle products can be found in Note 740964.1. If you have any question about the schedules, or if you have a suggestion for an Advisor Webcast to be planned in future, please send an e-mail to Ruediger Ziegler.


  • Have you used the ExecutionValue and ExecValueVariable properties?

    The ExecutionValue property and its friend ExecValueVariable are a much undervalued feature of SSIS, and many people I talk to are not even aware of their existence, so I thought I’d try and raise their profile a bit. The ExecutionValue property is defined on the base object Task, so all tasks have it available, but it is up to the task developer to do something useful with it. The basic idea behind it is that it allows the task to return something useful and interesting about what it has performed, in addition to the standard success or failure result. The best example perhaps is the Execute SQL Task, which uses the ExecutionValue property to return the number of rows affected by the SQL statement(s). This is a very useful figure, something people often want to capture into a variable, and they start using the result set options to do it. Unfortunately we cannot read the value of a task property at runtime from within a SSIS package, so the ExecutionValue property on its own is a bit of a let-down, but enter the ExecValueVariable and we have the perfect marriage. The ExecValueVariable is another property exposed through the task (TaskHost), which lets us select a SSIS package variable. What happens now is that when the task sets the ExecutionValue, the interesting value is copied into the variable we set on the ExecValueVariable property, and a variable is something we can access and do something with. So, put simply, if the ExecutionValue property value is of interest, make sure you create yourself a package variable and set its name as the ExecValueVariable. Have a look at the 3-step guide below: 1. Configure your task as normal; for example, the Execute SQL Task here calls a stored procedure to do some updates. 2. Create a variable of a suitable type to match the ExecutionValue; an integer is used to match the result we want to capture, the number of rows. 3. Set the ExecValueVariable for the task: just select the variable we created in step 2. You need to do this in the Properties grid for the task (shortcut key: select the task and press F4). Now when we execute the sample task above, our variable UpdateQueueRowCount will get the number of rows we updated in our Execute SQL Task. I've tried to collate a list of tasks that return something useful via the ExecutionValue and ExecValueVariable mechanism, but the documentation isn't always great:

        Execute SQL Task - Returns the number of rows affected by the SQL statement or statements.
        File System Task - Returns the number of successful operations performed by the task.
        File Watcher Task - Returns the full path of the file found.
        Transfer Error Messages Task - Returns the number of error messages that have been transferred.
        Transfer Jobs Task - Returns the number of jobs that are transferred.
        Transfer Logins Task - Returns the number of logins transferred.
        Transfer Master Stored Procedures Task - Returns the number of stored procedures transferred.
        Transfer SQL Server Objects Task - Returns the number of objects transferred.
        WMI Data Reader Task - Returns an object that contains the results of the task. Not exactly clear, but I assume it depends on the WMI query used.
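    Once the row count is in UpdateQueueRowCount, any downstream component can use it. For instance, a rough sketch of reading it from a Script Task in C#, assuming the variable has been listed in the task's ReadOnlyVariables:

        // Sketch: consume the captured ExecutionValue later in the package.
        int rowsUpdated = (int)Dts.Variables["User::UpdateQueueRowCount"].Value;
        bool fireAgain = true;
        Dts.Events.FireInformation(0, "UpdateQueue",
            "Rows updated: " + rowsUpdated, string.Empty, 0, ref fireAgain);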

    Read the article

  • C++ and SDL resource management for 2D game

    - by KuruptedMagi
    My first question is about state managers. I do not use the singleton pattern (I've read many random posts with various reasons not to use it); I have a gameStateManager which runs the pointer cCurrentGameState->render(), etc. I want to make a transitioning game. This engine should ideally cover both a platformer and a bird's-eye RPG (with some recoding; I just mean the base engine), both of which will load different levels and events, such as world map, dungeon, shops, etc. So I then thought, rather than having to store all this data within all the states, I would break the engine into gameStates and playStates... when gameState reaches gameStatePlay(), gameStatePlay simply runs the usual handleInput, logic, and render for the playStates, just as the low-level gameStateManager does. This lets me store all the player data within the base playState class without storing useless data in the gameStates. Now I have added a separate mapEditor, which uses editorStates from gameStateEditor. Is this too much usage of the gameState concept? It seems to work pretty well for me, so I was wondering if I am too far off a common implementation of this.

    My second question is about image resources. I have my sprite class with nothing but static members, mainly loadImage, applySurface, and my screen pointer. I also have a map pairing imageName enums with actual SDL_Surface pointers, and one pairing clipNumber enums with a wrapper class for a vector of clips, so that each reference in the map can have different numbers of clips with different sizes. I thought it would be better to store all these images and the screen within one static body, since 20 different goblins all use the same sprite sheet and all need to print to the same screen, and of course this way I do not need to pass my screen reference to every little entity. The imageMap seems to work very well; I can even search through the map at entity creation to see if a particular image already exists, creating it if it doesn't, and destroying the image when the last entity that needs it is destroyed. The vectored clip map, however, seems to take too long to initialize, so if I run past the state that initializes them too fast, the game crashes. Plus, the clip map call is half of this line =P

    SPRITE::applySurface( cEditorMap.cTiles[x][y].iX, cEditorMap.cTiles[x][y].iY, SPRITE::mImages[ IMAGE_TILEMAP ], SPRITE::screen, SPRITE::mImageClips[IMAGE_TILEMAP]->clips.at( cEditorMap.cTiles[x][y].iTileType ) );

    Again, do I have the right idea? I like the imageMap, but am I better off with each entity storing its own clips?

    My last question is about collision detection. I only grasp the basics and will look at per-pixel and circular collision soon, but how can I determine which side the collision comes from with just basic square collision detection? I tried breaking each entity into four collision zones, but that just gave me problems with walking through walls and the like. Also, is per-pixel color collision a good way to decide what collision just occurred, or is checking multiple colors for multiple entities too taxing each cycle?
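    For the last question, a common approach with axis-aligned rectangles is to measure how deeply the two boxes overlap on each axis; the axis with the smaller penetration tells you the side of impact, and the overlap itself is the distance to push the entity back out. A minimal sketch (function and enum names are hypothetical; SDL 1.2-style SDL_Rect assumed, not code from the question):

    #include <algorithm>
    #include "SDL.h"

    enum CollisionSide { SIDE_NONE, SIDE_LEFT, SIDE_RIGHT, SIDE_TOP, SIDE_BOTTOM };

    // Which side of 'a' touched 'b'? Uses the minimal-overlap axis.
    CollisionSide collisionSide( const SDL_Rect& a, const SDL_Rect& b )
    {
        // Overlap depth on each axis (<= 0 means no intersection).
        int overlapX = std::min( a.x + a.w, b.x + b.w ) - std::max( (int)a.x, (int)b.x );
        int overlapY = std::min( a.y + a.h, b.y + b.h ) - std::max( (int)a.y, (int)b.y );

        if ( overlapX <= 0 || overlapY <= 0 )
            return SIDE_NONE;

        if ( overlapX < overlapY )
            // Shallower on X: the contact is on the left or right.
            return ( a.x < b.x ) ? SIDE_RIGHT : SIDE_LEFT;
        else
            // Shallower on Y: the contact is on the top or bottom.
            return ( a.y < b.y ) ? SIDE_BOTTOM : SIDE_TOP;
    }

    Resolving along the smaller overlap also gives the push-out distance directly (overlapX or overlapY), which tends to avoid the walking-through-walls problem the four-zone approach causes; per-pixel color checks every frame are usually overkill for plain tile and entity collisions.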

    Read the article

  • What could be the best way to generalize data from Facebook and Twitter?

    - by Sjaak van der Heide
    I am not sure if this is the best subsite to ask this question, but I'm pretty sure it doesn't fit on the normal or Facebook SO page... I've been asked to make a general API for connecting to several social media platforms (at the moment Facebook and Twitter). I have already implemented both of them separately, meaning I retrieve the data I need from both Facebook and Twitter and hold it in its own data class: in my case a list of FacebookTimelineItems and a list of TwitterTimelineItems. Now the hard part is taking the fields that are used in both (username, id, message and such) and making one general class that is eventually passed on to whoever sent the call to my API. These are two pictures of the data classes I have:

    http://imageshack.us/photo/my-images/703/facebookdata.png/
    http://imageshack.us/photo/my-images/204/twitterdata.png/

    They are probably not 100% correct, but they give an idea of what it looks like. Now I've had several ideas about how to generalize the two, which is harder than I thought at first:

    1. Create an interface (TimelineItem) and let the other classes extend it. This way I'll always be sure I have a class that contains at least the basic info I need. The downside is that deserializing the JSON seems to be a nightmare.
    2. Use the two data classes I have and combine them into a new class afterwards, then pass that one back to whoever requested it. This would probably work, but I get the idea it's not the best way to tackle this problem, and it is pretty dodgy IF I get it working.
    3. Or, in case the other two are nearly impossible: keep the two separated in the front end, and go sit in the corner crying because I've just figured out you can't lump together Facebook and Twitter...

    Note: I don't have to make the front-end part (view); I just make sure the Model is nicely filled with data :) I hope I placed this in the right section; if I didn't, I apologise and would like to know where I should go with my question. Thanks in advance for any replies/ideas/opinions on this.
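    To illustrate the first option, here is a minimal sketch (in C++, since the question doesn't name a language; all class and field names are hypothetical) of a common base type that both platform-specific items extend, while deserialization keeps targeting the concrete classes:

    #include <string>
    #include <vector>
    #include <memory>

    // Only the fields every platform can provide.
    class TimelineItem
    {
    public:
        virtual ~TimelineItem() {}
        virtual std::string id() const = 0;
        virtual std::string username() const = 0;
        virtual std::string message() const = 0;
    };

    // Keeps its Facebook-specific fields, but callers of the API
    // only ever see the TimelineItem interface.
    class FacebookTimelineItem : public TimelineItem
    {
    public:
        std::string id() const { return id_; }
        std::string username() const { return fromName_; }
        std::string message() const { return story_; }
    private:
        std::string id_, fromName_, story_; // plus likes, comments, ...
    };

    class TwitterTimelineItem : public TimelineItem
    {
    public:
        std::string id() const { return id_; }
        std::string username() const { return screenName_; }
        std::string message() const { return text_; }
    private:
        std::string id_, screenName_, text_; // plus retweets, hashtags, ...
    };

    // The API hands back one merged, platform-neutral timeline.
    typedef std::vector< std::unique_ptr<TimelineItem> > Timeline;

    The JSON nightmare from the first option usually disappears if you keep deserializing into the concrete classes exactly as you do now and only expose them through the interface afterwards; you never deserialize into the base type itself.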

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM.

    North Carolina State University, Jane S. McKimmon Conference Center, 1101 Gorman St, Raleigh, North Carolina 27606, United States

    From the Raleigh Event page:

    Event Overview
    Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect:

    Windows Development with Visual Studio 2010
    Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo-heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers.

    Web and Cloud Development with Visual Studio 2010
    If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010, both for in-house ASP.NET development and the new frontier of the cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated jQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure.

    Windows Phone 7 Developer Tools and Platform Overview
    This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience, including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications.

    Have a day. :-|

    Read the article

< Previous Page | 316 317 318 319 320 321 322 323 324 325 326 327  | Next Page >