Search Results

Search found 17816 results on 713 pages for 'variable names'.


  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background: I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple because code can be shared between server and client. I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem: Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method, which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked well so far, in terms of functionality. But as the codebase grew bigger, this style of coding started to seem a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have the Image class like browsers do). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.

    Possible workaround (with problems): I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists different on the server and the client. Now if I want to clear all subscribed events only from the shared simulator, I can't.
    Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • Managing game state / 'what to update' within an XNA game 'screen'

    - by codinghands
    Note - having read through other GDev questions suggested when writing this question, I'm confident this isn't a dupe. Of course, it's 3am and I'm likely wrong, so please mod as such if so. I'm trying to figure out how best to manage state within my game screens - please bear with me though! At the moment I'm using a heavily modified version of the fantastic game state management example on the XNA site available here. This is working perfectly for my 'Screens' - 'IntroScreen' with some shiny logos, 'TitleScreen' and a 'MenuScreen' stacked on top for the title and menu, 'PlayScreen' for the actual gameplay, etc. Each screen has a bunch of sprites, and an 'Update' and 'Draw', managed by a 'ScreenManager'. In addition to the above, and as suggested as an answer to my other question here, most screens have a 'GameProcessQueue' class full of 'GameProcess'es which lets me do just about anything (animations, youbetcha!), in any order, in sequence or parallel. Why mention all this? When I talk about managing game state I'm thinking more of complex scenarios within a 'Screen'. 'TitleScreen', 'MenuScreen' and the like are all relatively simple. 'PlayScreen' less so. How do people manage the different 'states' within the screen (or whatever you call it) that 'does' gameplay? (for me, the 'PlayScreen') I've thought about the following:

    1. Enum of different states in the Screen, an 'activeState' enum-type variable, switching on the enum in the Screen Update() loop to determine which Screen Update 'sub'-function is called. I can see this getting hairy pretty fast though as screens get more complex, with the 'PlayScreen' becoming a behemoth mega-class.

    2. 'State' class with an Update loop - a Screen can have any number of 'States', 1+ of which are 'active'. The Screen update loop calls update on all active states. States themselves know which screen they belong to, and may even belong to a 'StateManager' which handles transitioning from one state to the next. Once a state is over it's removed from the Screen's state list. The Screen doesn't need a bunch of GameProcessQueues; each State has its own.

    3. Abstract Screen further to be more flexible - I can see the similarities between what I've got (game 'Screens' handled by a ScreenManager) and what I want (states within a screen, and a mechanism to manage them). However at the moment I see 'Screens' as high level and very distinct ('PlayScreen' with baddies != 'MenuScreen' with 4 words and event handlers), whereas my proposed 'States' are more intrinsically tied to a specific screen with complex requirements. I think.

    This is for a turn-based board game, so it's easier to define things as a discrete series of steps (IntroAnimation - P1Turn - P2Turn - P1Turn ... - GameOver - ...). Obviously with an open-world RPG things are very different, but any advice in this scenario is appreciated. If I'm just going OOP-crazy please say so. Similarly I'm conscious there's a huge amount on this site re: state management. But as my first 'serious' game after a couple of false starts I'd like to get this right, and would rather be harassed and modded down than never ask :)
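    For what it's worth, a minimal sketch of option 2 as I picture it, written against plain XNA types; the GameState and StateManager names and members are illustrative assumptions, not part of the XNA game state management sample:

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;

      // A screen-local state: IntroAnimationState, P1TurnState, GameOverState, etc.
      public abstract class GameState
      {
          public bool IsFinished { get; protected set; }
          public abstract void Update(GameTime gameTime);
          public virtual void Draw(GameTime gameTime) { }
      }

      // Owned by a single screen (e.g. PlayScreen); the screen's Update/Draw
      // delegate to this manager instead of switching on a state enum.
      public class StateManager
      {
          private readonly List<GameState> active = new List<GameState>();

          public void Push(GameState state) { active.Add(state); }

          public void Update(GameTime gameTime)
          {
              // Update every active state, then drop the ones that have ended.
              foreach (GameState state in active.ToArray())
              {
                  state.Update(gameTime);
                  if (state.IsFinished)
                      active.Remove(state);
              }
          }

          public void Draw(GameTime gameTime)
          {
              foreach (GameState state in active)
                  state.Draw(gameTime);
          }
      }

    A turn-based flow then becomes a matter of pushing the next state when the current one finishes, and each state can own its own GameProcessQueue as described above.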

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction: SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter: Split core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it: Split core situations are easily avoided. In most cases the logical domain constraint manager will avoid them without any administrative action, but they can be entirely prevented by one of the following:

    Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.

    Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.

    Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary: Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has 2 different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against that service along with the number of requests and its average request time. This information can provide a good start in possibly troubleshooting performance issues or tracking a particular issue.

      >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
      >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
      requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

    To change the default rotation or size of output, these can be set as configuration variables for the server:

      RequestAuditIntervalSeconds1 - Used for the shorter of the two summary intervals (default is 120 seconds)
      RequestAuditIntervalSeconds2 - Used for the longer of the two summary intervals (default is 3600 seconds)
      RequestAuditListDepth1 - Number of services listed for the first request audit summary interval (default is 5)
      RequestAuditListDepth2 - Number of services listed for the second request audit summary interval (default is 20)

    If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page and now you will get an audit entry for each and every service request.

      >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

    What's nice is it reports who executed the service and how long that particular request took. In some cases, depending on the service, additional information will be added to the tracing relevant to that service.

      >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

    You can even go into more detail and insert any additional data into the tracing. You simply need to add this configuration variable with a comma separated list of variables from local data to insert:

      RequestAuditAdditionalVerboseFieldsList=TotalRows,path

    In this case, for any search results, the number of items the user found is traced:

      >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

    I also recently ran into the case where services were being called from a client through RIDC. All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server. So what we did was add a new field to the request audit list:

      RequestAuditAdditionalVerboseFieldsList=ClientToken

    And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request. Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.

    Read the article

  • Oracle at HR Tech: What a Difference a Year Makes

    - by Natalia Rachelson
    Last week, I had the privilege of attending the famous HR Technology Conference (HR Tech) in my new hometown of Chicago. This annual event, which draws the who's who of the world of HR technology, was by far the biggest yet. It wasn't just the highest level of attendance that was mind-blowing, but also the amazing quality of attendees. Kudos go to the organizers, especially Bill Kutik, for pulling together such a phenomenal conference. Conference highlights included Naomi Bloom's (http://infullbloom.us) Masters Panel and Mark Hurd's General Session on the last day of the conference. Naomi managed to do the seemingly impossible -- get all of the industry heavyweights and fierce competitors to travel to Chicago for her panel. Here are the executives she hosted:

    Our own Steve Miranda
    Sanjay Poonen, President Global Solutions, SAP
    Stan Swete, CTO, Workday
    Mike Capone, VP for Product Development and CIO, ADP
    John Wookey, EVP, Social Applications, Salesforce.com
    Adam Rogers, CTO, Ultimate Software

    I bet you think "WOW" when you look at these names. Just this panel by itself would have been enough of a draw for any tech conference, so Naomi and Bill really scored. TechTarget published a great review of the conference here. And here are a few highlights from Steve: "Steve Miranda, EVP Apps Dev Oracle, said delivering software in the cloud helps vendors shape their products to customer needs more efficiently. 'As vendors, we're able to improve the software faster,' he said. 'We can see in real time what customers are using and not using.' Miranda underscored Oracle's commitment to socializing its HCM platform, and named recruiting as an area where social has had a significant impact. 'We want to make social a part of the fabric, not a separate piece,' he said. 'Already, if you're doing recruiting without social, it probably doesn't make any sense.'" Having Mark Hurd at the conference was another real treat and everyone took notice. The Business of HR publication covered Mark's participation at HR Tech and the full article is available here. Here is what Business of HR had to say: "In truth, the story of Oracle today is a story similar to many of the current and potential customers they faced at the conference this week. Their business is changing and growing. They've dealt with acquisitions of their own and their competitors continue to nip at their heels. They are dealing with growth (and yes, they are hiring in case you're interested). They have concerns about talent as well. If Oracle feels as strongly about their products as they seem to be, they will be getting their co-president in front of a lot more groups of current and potential customers like they did at the HR Technology Conference this year. And here's hoping this is one executive who won't stop talking about the importance of talent just because he isn't at the HR tech conference anymore."

    Natalia Rachelson
    Senior Director, Oracle Applications

    Read the article

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    A guest article by Maurice Bauhahn: Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool has been installed by default...but WebLogic may not have been 'Extended' to allow you to call it up (we are hopeful this 'Extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of this document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help:

    1. Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active).
    2. Ensure you have logged into the computer as the installing owner of EPM.
    3. Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load.
    4. On the machine where the WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix).
    5. Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button.
    6. Select the domain being used by EPM System. Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. Click 'Next'.
    7. Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it.
    8. Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. The system will grind for a few minutes while it configures (deploys?) EM.
    9. Start the AdminServer.

    Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines you may start and stop services via the following command line commands, analogous to those run on Linux/Unix (these more carefully space out the timings of these events):

    Ensure EPM is up: \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat
    Ensure WebLogic is up: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd
    Shut down WebLogic: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd
    Shut down EPM: \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat

    Now you should be able to troubleshoot more successfully with the EM tool.

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than actual programming question... (I got shot for posting this on Stack Overflow. They sent me here. At least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends. A thousand years ago when I took my first programming class (Fortran 66), and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. Ok, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated do loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin): if ..., etc. Each language has functions, subroutines or whatever they're called, some built-in, some user coded. They were set off by function_name( parameters ); as in sqrt( x ) or rand( y ); Lately, there seems to be a new set of punctuation rules. Especially in C++, where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr<gizmo> p(new gizmo); This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and 'for' seem to have grown parens: if(true) for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the arg-separating commas go? And finally, functions seem to have shed their parens: sqrt (2) select (...) I know, I know, loosening whitespace rules is good. Keep reading. Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, and find that the information that the placement of punctuation used to convey is gone? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB. Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

    Read the article

  • Synchronous Actions

    - by Dan Krasinski-Oracle
    Since the introduction of SMF, svcadm(1M) has had the ability to enable or disable a service instance and wait for that service instance to reach a final state. With Oracle Solaris 11.2, we've expanded the set of administrative actions which can be invoked synchronously. Now all subcommands of svcadm(1M) have synchronous behavior. Let's take a look at the new usage:

      Usage: svcadm [-v] [cmd [args ... ]]
      svcadm enable [-rt] [-s [-T timeout]] <service> ...        enable and online service(s)
      svcadm disable [-t] [-s [-T timeout]] <service> ...        disable and offline service(s)
      svcadm restart [-s [-T timeout]] <service> ...             restart specified service(s)
      svcadm refresh [-s [-T timeout]] <service> ...             re-read service configuration
      svcadm mark [-It] [-s [-T timeout]] <state> <service> ...  set maintenance state
      svcadm clear [-s [-T timeout]] <service> ...               clear maintenance state
      svcadm milestone [-d] [-s [-T timeout]] <milestone>        advance to a service milestone
      svcadm delegate [-s] <restarter> <svc> ...                 delegate service to a restarter

    As you can see, each subcommand now has a '-s' flag. That flag tells svcadm(1M) to wait for the subcommand to complete before returning. For enables, that means waiting until the instance is either 'online' or in the 'maintenance' state. For disable, the instance must reach the 'disabled' state. Other subcommands complete when:

    restart - A restart is considered complete once the instance has gone offline after running the 'stop' method, and then has either returned to the 'online' state or has entered the 'maintenance' state.
    refresh - If an instance is in the 'online' state, a refresh is considered complete once the 'refresh' method for the instance has finished.
    mark maintenance - Marking an instance for maintenance completes when the instance has reached the 'maintenance' state.
    mark degraded - Marking an instance as degraded completes when the instance has reached the 'degraded' state from the 'online' state.
    milestone - A milestone transition can occur in one of two directions: either the transition moves from a lower milestone to a higher one, or from a higher one to a lower one. When moving to a higher milestone, the transition is considered complete when the instance representing that milestone reaches the 'online' state. The transition to a lower milestone, on the other hand, completes only when all instances which are part of higher milestones have reached the 'disabled' state.

    That's not the whole story. svcadm(1M) will also try to determine if the actions initiated by a particular subcommand cannot complete. Trying to enable an instance which does not have its dependencies satisfied, for example, will cause svcadm(1M) to terminate before that instance reaches the 'online' state. You'll also notice the optional '-T' flag which can be used in conjunction with the '-s' flag. This flag sets a timeout, in seconds, after which svcadm gives up on waiting for the subcommand to complete and terminates. This is useful in many cases, but in particular when the start method for an instance has an infinite timeout but might get stuck waiting for some resource that may never become available. For the C-oriented, each of these administrative actions has a corresponding function in libscf(3SCF), with names like smf_enable_instance_synchronous(3SCF) and smf_restart_instance_synchronous(3SCF). Take a look at smf_enable_instance_synchronous(3SCF) for details.

    Read the article

  • Development Quirk From ASP.NET Dynamic Compilation

    - by jkauffman
    The Problem: I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent.) The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?) Sure looks easy! There must be something in the project referring to a FishViewModel.

    The Confusing Part: The first thing I noticed was that the error was occurring in a folder clearly not in my project and in files that I definitely had not created: %SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\App_Web_mezpfjae.1.cs I also ascertained these facts, each of which made me more confused than the last: Rebuild and Clean had no effect. No controllers in the project ever returned a ViewResult using FishViewModel. No views in the project defined that they use FishViewModel. Searching across all files included in the project for "FishViewModel" provided no results. The build server did not report a problem.

    The Solution: The problem stemmed from a file that was not included in the project but still present on the file system. (By the way, if you don't know this trick already, there is a toolbar button in the Solution Explorer window to "Show All Files" which allows you to see all files in the file system.) In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project. However, this was a bad move. It caused the build failure, and in order to fix the error, this file must be deleted. By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me.

    The Explanation: So, what's going on? This file isn't even a part of the project, so why is it failing the build? This is a behavior of ASP.NET Dynamic Compilation. This is the same process that occurs when deploying a webpage; ASP.NET compiles the web application's code. When this occurs on a production server, it has to do so without the .csproj file (which isn't usually deployed, if you've taken your time to do a deployment cleanly). This process has merely the file system available to identify what to compile. So, back in the world of developing the webpage in Visual Studio on my developer box, I run into the situation because the same process is occurring there. This is true even though I have more files on my machine than will actually get deployed. I can't help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down.

    The Conclusion: I had previously been accustomed to the idea that for C# projects, the .csproj file always "defines" the build behavior. This investigation has taught me that I'll need to shift my thinking a bit to remember that the file system has the final say when it comes to web applications, even on the developer's machine!

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to slideshare, I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. in this presentation, I describe two modes of reporting results that I use.  Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.  That said, I have been continuing to evolve my methods and reporting techniques, and sometimes, there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete, but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results. 

    Read the article

  • EPM troubleshooting Utilities

    - by THE
    (in via Maurice) "Are you keeping up to date with the latest troubleshooting utilities introduced from EPM 11.1.2.2? These are typically not described in product documentation, so you might miss references to them. The following five utilities may be run from the command line.

    (1) Deployment Report was introduced with EPM 11.1.2.2 (11 April 2012). It details logical web addresses, web servers, application ports, database connections, user directories, database repositories configured for the EPM system, data directories used by EPM system products, instance directories, FMW homes, deployment history, et cetera. It also helps to keep you honest about whether you made changes to the system and at what times! Download Shared Services patch 13530721 to get the backported functionality in EPM 11.1.2.1. Run it from:
      /Oracle/Middleware/user_projects/epmsystem1/bin/epmsys_registry.sh report deployment (on Unix/Linux)
      \Oracle\Middleware\user_projects\epmsystem1\bin\epmsys_registry.bat report deployment (on Microsoft Windows)
    The output is saved under \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\deployment_report.html

    (2) Log Analysis has received more "press". It was released with EPM 11.1.2.3 and helps the user to slice and dice EPM logs. It has many parameters which are documented when run without parameters, when run with the -h parameter, or in the 'Readme' file. It has also been released as a standalone utility for EPM 11.1.2.3 and earlier versions. (Sign in to My Oracle Support, click the 'Patches & Updates' tab, enter the patch number 17425397, and click the Search button. Download the appropriate platform-specific zip file, unzip, and read the 'Readme' file. Note that you must provide a proper value to a JAVA_HOME environment variable [pointer to the mother directory of the Java /bin subdirectory] in the loganalysis.bat | .sh file and use the -d parameter when running standalone.) Run it from:
      /Oracle/Middleware/user_projects/epmsystem1/bin/loganalysis.sh -h (on Unix/Linux)
      \Oracle\Middleware\user_projects\epmsystem1\bin\loganalysis.bat -h (on Microsoft Windows)
    The output is saved under the \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\ subdirectory.

    (3) The Registry Cleanup command may be used (without fear!) to clean up various corruptions which can affect the Hyperion (database-based) Repository. Run it from:
      /Oracle/Middleware/user_projects/epmsystem1/bin/registry-cleanup.sh (on Unix/Linux)
      \Oracle\Middleware\user_projects\epmsystem1\bin\registry-cleanup.bat (on Microsoft Windows)
    The actions are described on the command line.

    (4) The Remove Instance Command is only used if there are two or more instances configured on one computer and one of those should be deleted. Run it from:
      /Oracle/Middleware/user_projects/epmsystem1/bin/remove-instance.sh (on Unix/Linux)
      \Oracle\Middleware\user_projects\epmsystem1\bin\remove-instance.bat (on Microsoft Windows)

    (5) The Reset Configuration Tool was introduced with EPM 11.1.2.2. It nullifies Shared Services Hyperion Registry settings so that a service may be reconfigured. You may locate the values to substitute for <product> or <task> by scanning registry.html (generated by running epmsys_registry.bat | .sh). Find productNAME in INSTANCE_TASKS_CONFIGURATION and SYSTEM_TASKS_CONFIGURATION nodes and identify tasks by property pairs that have values of 'Configurated' or 'Pending'. Run it from:
      /Oracle/Middleware/user_projects/epmsystem1/bin/resetConfigTask.sh -product <product> -task <task> (on Unix/Linux)
      \Oracle\Middleware\user_projects\epmsystem1\bin\resetConfigTask.bat -product <product> -task <task> (on Microsoft Windows)"

    Thanks to Maurice for this collection of utilities.

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, demonstrate the impressive throughput performance of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users, with a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.

    Performance Landscape
      Systems: 1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) - Web; 3 x SPARC T4-2 (2 x SPARC T4 2.85 GHz) - App/Gateway; 1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) - DB
      Txn/hr: 239,748
      Users: 29,000
      Response Times (sec): Call Center 0.165, Order Management 0.925
      Call Center + Order Management Transactions: 197,128 + 42,620; Users: 20,300 + 8,700

    Configuration Summary
      Web Server: 1 x SPARC T4-1 server (1 x SPARC T4 processor, 2.85 GHz, 128 GB memory), Oracle Solaris 10 8/11, iPlanet Web Server 7
      Application Servers: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors, 2.85 GHz, 256 GB memory, 3 x 300 GB SAS internal disks, Oracle Solaris 10 8/11, Siebel CRM 8.1.1.5 SIA
      Database Server: 1 x SPARC T4-1 server (1 x SPARC T4 processor, 2.85 GHz, 128 GB memory), Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.2)
      Storage: 1 x Sun Storage F5100 Flash Array (80 x 24 GB flash modules)

    Benchmark Description
    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management:

    Siebel Financial Services Call Center - Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40%, and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 10, 13, and 35 seconds respectively.

    Siebel Order Management - Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 20 and 67 seconds respectively.

    Key Points and Best Practices
    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

    See Also: Siebel White Papers; SPARC T4-1 Server (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Siebel CRM (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (Restricted) patch (see Overview of GFv2) available as part of Oracle's Commercial Support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; package-based patches were released on Sep 13, 2011.

    Release Overview

    Description:
      SJS AS 9.1 U2 (GFv2 U2) - Patch 25 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
      GlassFish 2.1 - Patch 19 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
      GlassFish 2.1.1 - Patch 13 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.

    Patch Ids: This release comes in 3 different variants:
      Package-based patches with HADB: • Solaris SPARC - [128640-27] • Solaris i586 - [128641-27] • Linux RPM - [128642-27]
      File-based patches with HADB: • Solaris SPARC - [128643-27] • Solaris i586 - [128644-27] • Linux - [128645-27] • Windows - [128646-27]
      File-based patches without HADB: • Solaris SPARC - [128647-27] • Solaris i586 - [128648-27] • Linux - [128649-27] • Windows - [128650-27] • AIX - [137916-27]

    Update Date: Nov 23, 2011

    Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

    Status: CURRENT

    Bugs Fixed in this Patch:
      • [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
      • [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
      • [12807660]: BUILD, STAGE AND INTEGRATING HADB
      • [12807643]: INTEGRATE MQ 4.4 U2 P4
      • [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
      • [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
      • [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
      • [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
      • [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
      • [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
      • [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
      • [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
      • [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
      • [12547651]: GLASSFISH DISPLAY BUG
      • [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
      • [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
      • [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
      • [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional Notes: More details about these bugs can be found at My Oracle Support.

    Read the article

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General: blast support - The blast GUI has been removed and is no longer supported.

    Oracle Solaris 2.6 Support - As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read its crash dumps.

    New Commands: Sanity Command - Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes: scat_explore -r and -t options - The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

      scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.]N

    Where:
      -v          Verbose Mode: The command will print messages highlighting what it's doing.
      -a          Auto Mode: The command does not prompt for input from the user as it runs.
      -d dest     Instructs scat_explore to save its output in the directory dest instead of the present working directory.
      -r base_dir Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If it is not specified using the -d option, scat_explore names its output file as "scat_explore_system_name_hostid_lbolt_value_corefile_name."
      -t          Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.
      N           The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Tag Name - Definition
      FATAL   - An extremely important message which should be investigated.
      WARNING - A warning that may or may not have anything to do with the crash.
      ERROR   - An error, usually printed with a suggested command.
      ALERT   - Used to indicate something the tool discovered.
      INFO    - Purely informational message.
      INFO2   - A follow-up to an INFO tagged message.
      REDZONE - Usually used when printing memory info showing something is in the kernel's REDZONE.

    Example:
      $ scat --scat_explore -a -v -r /tmp vmcore.0
      #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
      #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
      #Extracting crash data...
      #Gathering standard crash data collections...
      #Panic string indicates a possible hang...
      #Gathering Hang Related data...
      #Creating tar file...
      #Compressing tar file...
      #Successful extraction
      SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results - The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a to: address for sending the results. This option is mandatory.
    Known Issues - There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon: Display of timestamps in threads and clock information is incorrect in some cases. There are alignment issues with some of the tables produced by the tool.

    Read the article

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I recently have completed a language library that acts as a giant filter for food items, and flows a bit like this: Products -> Recipes -> MenuItems -> Meals and finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information to each class, and seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there exists one backend user who edits and manipulates data, and multiple front end users who select their Meal(s) to create an Order. Ideally, all of the front end users would have all of this information stored locally within the library, and would update the library on startup from a database. How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of doing this, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe and so on and so forth every time I relaunch the program, which seems like a lot of wasteful loading. Here is a general flow of my architecture:

    Products:

      public class Product : IPortionable
      {
          public Product(string n, uint pNumber = 0)
          {
              name = n;
              productNumber = pNumber;
          }

          public string name { get; set; }
          public uint productNumber { get; set; }
      }

    Recipes:

      public Recipe(string n, decimal yieldAmt, Volume.Unit unit)
      {
          name = n;
          yield = new Volume(yieldAmt, unit);
          yield.ConvertUnit();
      }

      /// <summary>
      /// Creates a new ingredient object
      /// </summary>
      /// <param name="n">Name</param>
      /// <param name="yieldAmt">Recipe Yield</param>
      /// <param name="unit">Unit of Yield</param>
      public Recipe(string n, decimal yieldAmt, Weight.Unit unit)
      {
          name = n;
          yield = new Weight(yieldAmt, unit);
      }

      public Recipe(Recipe r)
      {
          name = r.name;
          yield = r.yield;
          ingredients = r.ingredients;
      }

      public string name { get; set; }
      public IMeasure yield;
      public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();

    MenuItems:

      public abstract class MenuItem : IScalable
      {
          public static string title = null;
          public string name { get; set; }
          public decimal maxPortionSize { get; set; }
          public decimal minPortionSize { get; set; }
          public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();
      }

    and Meal:

      public class Meal
      {
          public Meal(int guests)
          {
              guestCount = guests;
          }

          public int guestCount { get; private set; }

          //TODO: Make a new MainCourse class that holds pasta and Entree
          public Dictionary<string, int> counts = new Dictionary<string, int>(){
              {MainCourse.title, 0},
              {Side.title, 0},
              {Appetizer.title, 0}
          };

          public List<MenuItem> items = new List<MenuItem>();
      }

    The Database just stores and links each of these basic names and amounts together using IDs (RecipeID, ProductID and MenuItemID).
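    One direction to avoid foreach-ing the whole list on every relaunch is to load each table once at startup and index the objects by their IDs, so Recipe and MenuItem rows resolve their ingredients with dictionary lookups. A rough sketch only - the IDataReader loader and the ProductID/Name/ProductNumber column names are illustrative assumptions, not the actual schema:

      // Hypothetical loader: reads the Products table once and keeps an ID -> Product
      // index, so later Recipe/MenuItem wiring is an O(1) lookup per ingredient
      // instead of a scan over the full product list.
      using System.Collections.Generic;
      using System.Data;

      public class ProductCatalog
      {
          private readonly Dictionary<int, Product> productsById = new Dictionary<int, Product>();

          // e.g. fed from "SELECT ProductID, Name, ProductNumber FROM Products"
          public void Load(IDataReader reader)
          {
              while (reader.Read())
              {
                  int id = reader.GetInt32(reader.GetOrdinal("ProductID"));
                  string name = reader.GetString(reader.GetOrdinal("Name"));
                  uint number = (uint)reader.GetInt32(reader.GetOrdinal("ProductNumber"));
                  productsById[id] = new Product(name, number);
              }
          }

          // Ingredient rows stored as (RecipeID, ProductID, amount) can then be
          // resolved without rescanning the list on every launch.
          public Product GetById(int id)
          {
              Product p;
              return productsById.TryGetValue(id, out p) ? p : null;
          }
      }

    The same load-once, index-by-ID pattern applies to Recipes and MenuItems, so the only per-launch cost is one pass over each table rather than repeated searches.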

    Read the article

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample application produces a rotating colored cube. I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by:

    1. adding a cumulative variable to store my Z distance,
    2. adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place,
    3. adding GL.Translate(0.0f, 0.0f, Depth) - before the rotate functions.

    Result: the cube rotates a couple of times then disappears; it seems to be getting clipped out of the frustum. So my question is, what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff but am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but to also move along the Z axis. I have commented all my modifications in the code:

      // This gets called when the drawing surface is ready
      protected override void OnLoad (EventArgs e)
      {
          // this call is optional, and meant to raise delegates
          // in case any are registered
          base.OnLoad (e);

          // UpdateFrame and RenderFrame are called
          // by the render loop. This is takes effect
          // when we use 'Run ()', like below
          UpdateFrame += delegate (object sender, FrameEventArgs args) {
              // Rotate at a constant speed
              for (int i = 0; i < 3; i ++)
                  rot [i] += (float) (rateOfRotationPS [i] * args.Time);
          };

          RenderFrame += delegate { RenderCube (); };

          GL.Enable(All.DepthBufferBit); //Added by Noob
          GL.Enable(All.CullFace);
          GL.ShadeModel(All.Smooth);
          GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

          // Run the render loop
          Run (30);
      }

      void RenderCube ()
      {
          GL.Viewport(0, 0, viewportWidth, viewportHeight);

          GL.MatrixMode (All.Projection);
          GL.LoadIdentity ();

          if ( viewportWidth > viewportHeight ) {
              GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
          } else {
              GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
          }

          GL.MatrixMode (All.Modelview);
          GL.LoadIdentity ();

          Depth -= 0.02f; //Added by Noob
          GL.Translate(0.0f, 0.0f, Depth); //Added by Noob

          GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
          GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
          GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

          GL.ClearColor (0, 0, 0, 1.0f);
          GL.Clear (ClearBufferMask.ColorBufferBit);

          GL.VertexPointer(3, All.Float, 0, cube);
          GL.EnableClientState (All.VertexArray);
          GL.ColorPointer (4, All.Float, 0, cubeColors);
          GL.EnableClientState (All.ColorArray);
          GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

          SwapBuffers ();
      }
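    One direction that seems to keep the cube in view is to swap the orthographic projection for a perspective frustum whose far plane sits well beyond the cube's travel, and to enable depth testing rather than passing the clear-mask constant to GL.Enable. This is only a sketch, assuming the OpenTK ES 1.1 bindings used above expose GL.Frustum and All.DepthTest; it has not been verified against this exact sample:

      // Sketch only: perspective projection + depth test, assuming ES 1.1 bindings.
      GL.Enable(All.DepthTest);                 // depth testing (All.DepthBufferBit is a clear mask, not a capability)

      GL.MatrixMode(All.Projection);
      GL.LoadIdentity();
      float aspect = (float)viewportWidth / viewportHeight;
      // A near/far range of 1..100 leaves room for the cube to recede; GL.Ortho's
      // -1..1 near/far planes clip it almost immediately and give no perspective shrink.
      GL.Frustum(-aspect, aspect, -1.0f, 1.0f, 1.0f, 100.0f);

      GL.MatrixMode(All.Modelview);
      GL.LoadIdentity();
      GL.Translate(0.0f, 0.0f, -2.0f + Depth);  // start beyond the near plane, then drift away

      GL.ClearColor(0, 0, 0, 1.0f);
      GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);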

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script on this post will find where features are activated within your SharePoint 2010 farm.

    Problem

    Over the past few months I’ve gotten literally dozens of emails, blog comments, or personal requests from people asking “how do I find where a SharePoint feature has been activated?”  I wrote a script to find which features are installed on your farm almost 3 years ago.  There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010.  The problem is that these only tell you if a feature is installed, not where it has been activated.  This is especially important to know if you have multiple web applications, site collections, and/or sites.

    Solution

    The default call (no parameters) for Get-SPFeature will return all features in the farm.  Many of the parameter sets accept filters for specific scopes such as web application, site collection, and site.  If those are supplied then only the enabled / activated features are returned for that filtered scope.  Taking the concept of recursively traversing a SharePoint farm and merging that with calls to Get-SPFeature at all levels of the farm, you can find out which features are activated at each level.  Store the results in a variable and you end up with all features that are activated at every level.

    Below is the script I came up with (slight edits for posting on blog).  With no parameters the function lists all features activated at all scopes.  If you provide an Identity parameter you will find where a specific feature is activated.  Note that the display name for a feature you see in the SharePoint UI rarely matches the “internal” display name.  I would recommend using the feature id instead.  You can download a full copy of the script by clicking on the link below.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]
                $Identity
            )#end param

            Begin
            {
                # declare empty array to hold results. Will add custom member
                # for Url to show where each feature is activated on objects returned from Get-SPFeature.
                $results = @()
                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{
                        Identity    = $Identity
                        ErrorAction = "SilentlyContinue"
                    }
                }

                # check farm features
                $results += (Get-SPFeature -Farm -Limit All @params |
                             % {Add-Member -InputObject $_ -MemberType noteproperty `
                                -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                foreach($webApp in (Get-SPWebApplication))
                {
                    $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty `
                                    -Name Url -Value $webApp.Url -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += (Get-SPFeature -Site $site -Limit All @params |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty `
                                        -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)
                        $site.Dispose()

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += (Get-SPFeature -Web $web -Limit All @params |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty `
                                            -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)
                            $web.Dispose()
                        }
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

    [Snippet of output from Get-SPFeatureActivated]

    Conclusion

    This script has been requested for a long time and I’m glad to finally have a working “clean” version.  If you find any bugs or issues with the script please let me know.  I’ll be posting this to the TechNet Script Center after some internal review.  Enjoy the script and I hope it helps with your admin / developer needs.

    -Frog Out
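    A usage sketch, assuming the SharePoint 2010 snap-in is available and the function above has been saved and dot-sourced (the file name and the GUID below are placeholders):

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        . .\Get-SPFeatureActivated.ps1

        # every activated feature, at every scope in the farm
        Get-SPFeatureActivated | Sort-Object Scope, Url | Format-Table -AutoSize

        # where one specific feature (by id) has been activated
        Get-SPFeatureActivated -Identity "00000000-0000-0000-0000-000000000000" |
            Select-Object Scope, DisplayName, Url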

    Read the article

  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an Open Source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's BootCamp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space.

    While VirtualBox is documented to allow access to physical drives, I could not actually get it to work. Also, VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, they are only emulated. A full build of the Haiku OS takes 4.5 hours under VB. I had hoped to reduce build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems.

    The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen. Just imagine that the mouse was connected to the window title bar with a really stretchy spring. Also, Haiku's mouse lags quite far behind where I have moved it.

    I found lots of PPAs that enable Spice for QEMU / KVM at the Ubuntu Personal Package Archives. I tried a few of the PPAs but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a Wiki page about Spice, but it says that it only works on 64-bit. My Early 2006 MacBook Pro is 32-bit. Its Apple Model Identifier is MacBookPro1,1; these use Core Duos, NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 Instruction Set Architecture that x86 does not have?

    When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I Googled the error message, the usual solution was to make ~/.Xauthority readable. However, .Xauthority does not exist in my home directory. Instead I have a $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority. Neither does it work to copy the value of $XAUTHORITY into the file.

    I am ready to scream, because I've spent five fscking days trying to make KVM work for Haiku development. There is a whole lot more that is broken than the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be.

    Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest. Thank you for any advice you can give me.
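    On that last point, one thing that may be worth trying (assuming /dev/sdb really is the USB stick and the user running VirtualBox has read/write access to that device node): VirtualBox can wrap a whole physical disk in a raw VMDK, which can then be attached to the VM like any ordinary disk image.

        VBoxManage internalcommands createrawvmdk -filename ~/haiku-usb.vmdk -rawdisk /dev/sdb

    The fatal error at guest launch is often just a permissions problem on the raw device, so adding the user to the group that owns /dev/sdb (commonly "disk") before attaching the VMDK may be all that is needed.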

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutsey" and that we never talk about with any user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components. User-Centered Forms Design Considerations The starting point--something you must always keep in mind with your own design--is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms: Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now. Design your form to reflect the task involved (i.e., the business process) and not the database schema. On the form, group like fields together, logically. Eliminate duplicate data entry or prepopulate from previous data entry. Allow users to complete fields in the order they wish (i.e., no interdependency). Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline). Allow for final validation at the page level not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action and to easily navigate to the point of error. Better still, be error tolerant. Allow users to enter data in formats they comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos, convenience patterns can also be written). Only ask for data input when it's needed. Get rid of, or hide optional fields. 
    Cut down on the number of mandatory fields, and mark them clearly (use a *). Clearly label the fields in plain language. I am sure you have a few more tips on forms design for data entry users. Remember the user, and share your own tips in the comments.

    Read the article

  • Calculating for leap year [migrated]

    - by Bradley Bauer
    I've written this program using Java in Eclipse. I was able to utilize a formula I found, which I explain in the comment block. Using the for loop I can iterate through each month of the year, which I feel good about; that part of the code seems clean and smooth to me. Maybe I could give the variables full names to make everything more readable, but I'm just using the formula in its basic essence :) My problem is that it doesn't calculate correctly for years like 2008... leap years. I know that if (year % 400 == 0 || (year % 4 == 0 && year % 100 != 0)) then we have a leap year. Maybe if the year is a leap year I need to subtract a certain amount of days from a certain month. Any solutions or some direction would be great, thanks :)

        package exercises;

        public class E28 {
            /*
             * Display the first days of each month
             * Enter the year
             * Enter first day of the year
             *
             * h = (q + (26 * (m + 1)) / 10 + k + k/4 + j/4 + 5j) % 7
             *
             * h is the day of the week (0: Saturday, 1: Sunday ......)
             * q is the day of the month
             * m is the month (3: March, 4: April.... January and February are 13 and 14)
             * j is the century (year / 100)
             * k is the year of the century (year % 100)
             */
            public static void main(String[] args) {
                java.util.Scanner input = new java.util.Scanner(System.in);
                System.out.print("Enter the year: ");
                int year = input.nextInt();

                int j = year / 100; // Find century for formula
                int k = year % 100; // Find year of century for formula

                // Loop iterates 12 times. Guess why.
                for (int i = 1, m = i; i <= 12; i++) {
                    // Make m = i. So loop processes formula once for each month
                    if (m == 1 || m == 2)
                        m += 12; // Formula requires that Jan and Feb are represented as 13 and 14
                    else
                        m = i;   // if not jan or feb, then set m to i

                    // Formula created by a really smart man somewhere.
                    // I let the control variable i steer the direction of the formula's m value
                    int h = (1 + (26 * (m + 1)) / 10 + k + k/4 + j/4 + 5 * j) % 7;

                    String day;
                    if (h == 0) day = "Saturday";
                    else if (h == 1) day = "Sunday";
                    else if (h == 2) day = "Monday";
                    else if (h == 3) day = "Tuesday";
                    else if (h == 4) day = "Wednesday";
                    else if (h == 5) day = "Thursday";
                    else day = "Friday";

                    switch (m) {
                        case 13: System.out.println("January 1, " + year + " is " + day); break;
                        case 14: System.out.println("February 1, " + year + " is " + day); break;
                        case 3:  System.out.println("March 1, " + year + " is " + day); break;
                        case 4:  System.out.println("April 1, " + year + " is " + day); break;
                        case 5:  System.out.println("May 1, " + year + " is " + day); break;
                        case 6:  System.out.println("June 1, " + year + " is " + day); break;
                        case 7:  System.out.println("July 1, " + year + " is " + day); break;
                        case 8:  System.out.println("August 1, " + year + " is " + day); break;
                        case 9:  System.out.println("September 1, " + year + " is " + day); break;
                        case 10: System.out.println("October 1, " + year + " is " + day); break;
                        case 11: System.out.println("November 1, " + year + " is " + day); break;
                        case 12: System.out.println("December 1, " + year + " is " + day); break;
                    }
                }
            }
        }
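    A hedged sketch of the two adjustments that are usually missing when this formula misbehaves around leap years (variable names follow the program above). Zeller's congruence counts January and February as months 13 and 14 of the previous year, so j and k have to be recomputed inside the loop for those two months; and deriving m from i on every pass avoids the stale value carried over from the previous iteration, which silently shifts the month mapping.

        for (int i = 1; i <= 12; i++) {
            int m = (i == 1 || i == 2) ? i + 12 : i;   // Jan/Feb -> 13/14 ...
            int y = (m >= 13) ? year - 1 : year;       // ... of the previous year
            int j = y / 100;                           // century for the formula
            int k = y % 100;                           // year of the century
            int h = (1 + (26 * (m + 1)) / 10 + k + k / 4 + j / 4 + 5 * j) % 7;
            // h: 0 = Saturday, 1 = Sunday, ... exactly as in the comment block above
        }

    As a quick cross-check, new java.util.GregorianCalendar(year, i - 1, 1).get(java.util.Calendar.DAY_OF_WEEK) should agree with the corrected formula, bearing in mind that Calendar numbers Sunday as 1 while the formula numbers Saturday as 0.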

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition and the file group name on which each partition resided in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that but there are a few other tables you need to link to in order to grab the information required. The table I’m working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create  a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $partition function for the partitionid and finally look back to the sys.filegroups table for the filegroup names. It wasn’t pretty, it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table. Queries for distinct against 8.8 billion rows don’t go quickly. A few beers into a conversation with a friend and we ended up talking about work which led to a conversation about the task described above. The solution was already built in SQL Server, just needed to pull it together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values. For example July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values, everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an “everything else” bucket in my ssis package just in case I had any dayid values unaccounted for. To get the number of rows for a partition is very easy. The sys.partitions table contains values for each partition. Easy enough to achieve by querying for the object_id and index value of 1 (the clustered index) The final piece of information was the filegroup name. There are 2 options available to get the filegroup name, sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups but really it’s a matter of preference and data needs. In order to bridge between sys.partitions table and either sys.data_spaces or sys.filegroups you need to get the container_id. This can be done by joining sys.allocation_units.container_id to the sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id which then lets you join in either sys.data_spaces or sys.file_groups. The end result is the query below, which typically executes for me in under 1 second. I’ve included the join to sys.filegroups and to sys.dataspaces, and I’ve  just commented out the join sys.filegroups. As I mentioned above, this shaves a good 10-15 minutes off of my original ssis package and is a really easy tweak to get a boost in my ETL time. Enjoy.
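    A sketch of the kind of query described above (the table name is a placeholder, and the alignment between boundary_id and partition_number depends on whether the partition function is RANGE LEFT or RANGE RIGHT, so treat the join to sys.partition_range_values as a starting point rather than gospel):

        SELECT  p.partition_number,
                prv.value        AS boundary_value,      -- e.g. a dayid such as 20130704
                p.rows,
                fg.name          AS filegroup_name
        FROM    sys.partitions              AS p
        JOIN    sys.indexes                 AS i  ON i.object_id      = p.object_id
                                                 AND i.index_id       = p.index_id
        JOIN    sys.partition_schemes       AS ps ON ps.data_space_id = i.data_space_id
        JOIN    sys.allocation_units        AS au ON au.container_id  = p.hobt_id
                                                 AND au.type          = 1              -- IN_ROW_DATA only
        JOIN    sys.filegroups              AS fg ON fg.data_space_id = au.data_space_id
        LEFT JOIN sys.partition_range_values AS prv
                                                  ON prv.function_id  = ps.function_id
                                                 AND prv.boundary_id  = p.partition_number
        WHERE   p.object_id = OBJECT_ID(N'dbo.MyPartitionedTable')
        AND     p.index_id  = 1                                                        -- the clustered index
        ORDER BY p.partition_number;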

    Read the article

  • Parsing T-SQL – The easy way

    - by Dave Ballantyne
    Every once in a while, I hit an issue that would require me to interrogate/parse some T-SQL code.  Normally, I would shy away from this and attempt to solve the problem in some other way.  I have written parsers in the past using LEX and YACC, and as much fun and awesomeness as that path is, I couldn't justify the time it would take. However, this week I have been faced with just such an issue, and at the back of my mind I can remember reading through the SQL Server 2012 feature pack and seeing something called “Microsoft SQL Server 2012 Transact-SQL Language Service”.  This is described there as: “The SQL Server Transact-SQL Language Service is a component based on the .NET Framework which provides parsing validation and IntelliSense services for Transact-SQL for SQL Server 2012, SQL Server 2008 R2, and SQL Server 2008.” Sounds like just what I was after.  Documentation is very scant on this, so don't take what follows as best practice or best use, just a practice and a use. Knowing roughly what I was looking for, I found the relevant assembly in the GAC, which is simply named 'Microsoft.SqlServer.Management.SqlParser'. You won't find much in terms of documentation if you do a web search, but you will find the MSDN documentation that lists the members and methods etc. The “Scanner” class sounded the most appropriate for my needs, as it is described as “Scans Transact-SQL searching for individual units of code or tokens.”. After a bit of poking around, the code I ended up with was something like:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null

        $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions
        $ParseOptions.BatchSeparator = 'GO'

        $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions)

        $Sql = "Create Procedure MyProc as Select top(10) * from dbo.Table"

        $Parser.SetSource($Sql, 0)

        $Token = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET
        $Start = 0
        $End = 0
        $State = 0
        $IsEndOfBatch = $false
        $IsMatched = $false
        $IsExecAutoParamHelp = $false

        while (($Token = $Parser.GetNext([ref]$State, [ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp)) -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF)
        {
            try {
                ($TokenPrs = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null
                $TokenPrs
                $Sql.Substring($Start, ($End - $Start) + 1)
            }
            catch {
                $TokenPrs = $null
            }
        }

    As you can see, the $Sql variable holds the SQL to be parsed; that is pushed into the $Parser object using SetSource, and then we use GetNext until the EOF token is returned.  GetNext also returns the Start and End character positions of the parsed text within the source string. This script's output is:

        TOKEN_CREATE     Create
        TOKEN_PROCEDURE  Procedure
        TOKEN_ID         MyProc
        TOKEN_AS         as
        TOKEN_SELECT     Select
        TOKEN_TOP        top
        TOKEN_INTEGER    10
        TOKEN_FROM       from
        TOKEN_ID         dbo
        TOKEN_TABLE      Table

    Note that the '(', ')' and '*' characters returned a token type that is not present in the Microsoft.SqlServer.Management.SqlParser.Parser.Tokens enum, which caused an error that was caught in the catch block.  Fun, fun, fun, simple T-SQL parsing.  Hope this helps someone in the same position; let me know how you get on.
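    Building on the loop above, a small wrapper is handy when you want to filter the token stream rather than print it. This sketch uses exactly the same assembly, Scanner methods and Tokens enum as the post; the raw token value is kept (rather than cast to the enum) because, as noted, punctuation comes back as values outside the enum.

        function Get-SqlToken {
            param([string]$Sql)

            $opts = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions
            $opts.BatchSeparator = 'GO'
            $scanner = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($opts)
            $scanner.SetSource($Sql, 0)

            $state = 0; $start = 0; $end = 0
            $isMatched = $false; $isExecAutoParamHelp = $false

            while (($token = $scanner.GetNext([ref]$state, [ref]$start, [ref]$end, [ref]$isMatched, [ref]$isExecAutoParamHelp)) -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF)
            {
                # one object per token: raw value, the matching text and its position
                [pscustomobject]@{
                    Token = $token
                    Text  = $Sql.Substring($start, ($end - $start) + 1)
                    Start = $start
                    End   = $end
                }
            }
        }

        # e.g. list every identifier in a batch
        Get-SqlToken -Sql "select name from sys.tables" |
            Where-Object { $_.Token -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_ID }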

    Read the article

  • Detect multitouch (two fingers touch) on a sprite to apply pinch zoom behaviour

    - by Tahreem
    I am using andengine, want to move, zoom and rotate multiple sprites individually on a scene. I have achieved "move" but for pinch zoom i am unable to get the event to two fingers' touch. Below is the code: public class Main extends SimpleBaseGameActivity { private Camera camera; private BitmapTextureAtlas mBitmapTextureAtlas; private ITextureRegion mFaceTextureRegion; private ITextureRegion mFaceTextureRegion2; Sprite face2; private static final int CAMERA_WIDTH = 800; private static final int CAMERA_HEIGHT = 480; @Override public EngineOptions onCreateEngineOptions() { camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT); EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new RatioResolutionPolicy( CAMERA_WIDTH, CAMERA_HEIGHT), camera); return engineOptions; } @Override protected void onCreateResources() { BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/"); this.mBitmapTextureAtlas = new BitmapTextureAtlas( this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST); BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(this.mBitmapTextureAtlas, // this, "ui_ball_1.png", 0, 0, 1, 1), // this.getVertexBufferObjectManager()); this.mFaceTextureRegion = BitmapTextureAtlasTextureRegionFactory .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0); this.mFaceTextureRegion2 = BitmapTextureAtlasTextureRegionFactory .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0); this.mBitmapTextureAtlas.load(); this.mEngine.getTextureManager().loadTexture(this.mBitmapTextureAtlas); } @Override protected Scene onCreateScene() { this.mEngine.registerUpdateHandler(new FPSLogger()); final Scene scene = new Scene(); scene.setBackground(new Background(0.09804f, 0.6274f, 0.8784f)); final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion .getWidth()) / 2; final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion .getHeight()) / 2; final Sprite face = new Sprite(centerX, centerY, this.mFaceTextureRegion, this.getVertexBufferObjectManager()) { @Override public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) { this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2, pSceneTouchEvent.getY() - this.getHeight() / 2); return true; } }; face.setScale(2); scene.attachChild(face); scene.registerTouchArea(face); face2 = new Sprite(200, 200, this.mFaceTextureRegion2, this.getVertexBufferObjectManager()) { @Override public boolean onAreaTouched(final TouchEvent pSceneTouchEvent, final float pTouchAreaLocalX, final float pTouchAreaLocalY) { switch(pSceneTouchEvent.getAction()){ case TouchEvent.ACTION_DOWN: int count = pSceneTouchEvent.getMotionEvent().getPointerCount() ; for(int i= 0; i <count; i++) { int id = pSceneTouchEvent.getMotionEvent().getPointerId(i); } break; case TouchEvent.ACTION_MOVE: this.setPosition(pSceneTouchEvent.getX() -this.getWidth() / 2, pSceneTouchEvent.getY()-this.getHeight() / 2); break; case TouchEvent.ACTION_UP: break; } return true; } }; face2.setScale(2); scene.attachChild(face2); scene.registerTouchArea(face2); scene.setTouchAreaBindingOnActionDownEnabled(true); return scene; } } This line int count = pSceneTouchEvent.getMotionEvent().getPointerCount() ; should set 2 to the count variable if i touch the sprite with to fingers, then i can apply zooming functionality (setScale method) on the sprite by getting the distance between the coordinates of two fingers. Can anyone help me? why it does not detect two fingers? 
    And without this, how can I zoom the sprite with a two-finger pinch? I am very new to game development, so any help would be appreciated. Thanks in advance.
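    A hedged sketch of the usual approach in AndEngine (class names below are from the GLES2 branch's org.andengine.input.touch.controller and org.andengine.input.touch.detector packages; verify them against the branch you are on). Two things are normally needed: the engine must be given a MultiTouchController, otherwise getPointerCount() never reports the second finger, and the pinch itself is easiest to handle with PinchZoomDetector rather than by reading pointer counts by hand.

        // 1) enable multi-touch when the engine is created
        @Override
        public Engine onCreateEngine(final EngineOptions pEngineOptions) {
            final Engine engine = new Engine(pEngineOptions);
            // the exact support-check helper differs between branches; the call that
            // matters here is setTouchController(new MultiTouchController())
            engine.setTouchController(new MultiTouchController());
            return engine;
        }

        // 2) in onCreateScene(), after creating face2, scale it from pinch events
        final PinchZoomDetector pinchZoomDetector = new PinchZoomDetector(
            new PinchZoomDetector.IPinchZoomDetectorListener() {
                private float initialScale;
                @Override
                public void onPinchZoomStarted(PinchZoomDetector pDetector, TouchEvent pTouchEvent) {
                    initialScale = face2.getScaleX();           // remember the scale at pinch start
                }
                @Override
                public void onPinchZoom(PinchZoomDetector pDetector, TouchEvent pTouchEvent, float pZoomFactor) {
                    face2.setScale(initialScale * pZoomFactor); // zoom while the fingers move
                }
                @Override
                public void onPinchZoomFinished(PinchZoomDetector pDetector, TouchEvent pTouchEvent, float pZoomFactor) {
                    face2.setScale(initialScale * pZoomFactor);
                }
            });

        scene.setOnSceneTouchListener(new IOnSceneTouchListener() {
            @Override
            public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
                pinchZoomDetector.onTouchEvent(pSceneTouchEvent);
                return true;
            }
        });

    To pinch-zoom each sprite individually, track which sprite was last touched (for example in onAreaTouched) and apply the detector's zoom factor to that sprite instead of face2.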

    Read the article

  • Issues glVertexAttribPointer last 2 parameters?

    - by NoobScratcher
    Introduction Hello I will start out by explaining my setup, showing samples as I go along explaining the situation. I'm using these tools: OpenGL 3.3 GLSL 330 C++ Problem The problem is when I render the wavefront obj 3d model it gives a very weird visual glitch the model was supposed to be a square but instead its a triangluated mess with parts of the vertexes pointing in a stretched direction in massive amounts towards the bottom left side of the frustum.... Explanation: I'm using std::vectors to store my wavefront .obj model data using sscanf to get the floating point values into the structure members x,y,z and store them into the Points structure variable p; int index = IndexAssigner(1, 1); ifstream file (list[index].c_str() ); points.push_back(Point()); Point p; int face[4]; while (!file.eof() ) { char modelbuffer[10000]; file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } //Turn on FileReader aka "RENDER CODE" FileReader = true; } then I render the Points vector using the .data() member of std::vectors to the frustum. Other declarations: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); Vector declarations: struct Point {float x, y , z; }; std::vector<int>faces; std::vector<Point>points; Render code: glGenBuffers(1, &vertexbuffer); glGenTextures(1, &ModelTexture); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBindTexture(GL_TEXTURE_3D, ModelTexture); glTexImage2D(GL_TEXTURE_2D, 0,GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); glEnableVertexAttribArray(3); //Translation Process GLfloat TranslationMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0 }; //Send Translation Matrix up to the vertex shader glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix); glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); I tried looking at what was causing this and went through every function every parameter ,etc looked at the man pages. Then found out that it could be my glVertexAttribPointer. Here are the man pages for glVertexAttribPointer http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters is my problem How do I write those 2 last parameters do I try putting the data from Points into it?. glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); How does it work with vectors? Is it fast?* if you can not be bothered too look at the man pages here is the scripts coming from the man pages directly. Stride Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0. Pointer Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0. If you want my full source - http://ideone.com/fPfkg Thanks Again if you do read this.
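    Since the question is specifically about the last two parameters: with a vertex buffer object bound, a sketch of how they are normally filled in, reusing the names from the post (Point, points, vertexbuffer). Note that sizeof(points) measures the std::vector object itself rather than its contents, which is a separate bug worth fixing in the same place.

        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER,
                     points.size() * sizeof(Point),   // size of the actual vertex data, not sizeof(points)
                     points.data(),
                     GL_STATIC_DRAW);

        glVertexAttribPointer(3,                // attribute index: must match the location used in the GLSL 330 vertex shader
                              3,                // x, y, z per vertex
                              GL_FLOAT,
                              GL_FALSE,
                              sizeof(Point),    // stride: bytes from one vertex to the next (0 also works here, the data is tightly packed)
                              (const void*)0);  // byte offset into the bound VBO, not a client-side pointer such as points.data()
        glEnableVertexAttribArray(3);

    Two more things to check against this sketch: wavefront .obj face indices are 1-based, so each index in faces needs 1 subtracted before it reaches glDrawElements, and GL_QUADS is not available in a 3.3 core profile, so the faces should be split into triangles (or drawn with a compatibility profile).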

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant that only one developer at the time could work on the database. It was very easy for a developer to execute accidentally the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data. It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS but it’s a separate module and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing. With Extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.
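    By way of illustration only (the property name, procedure and table below are invented for the example; the only conventional property name mentioned above is MS_Description), an extended property can carry a test assertion alongside the routine it exercises, and a build script can read the assertions back out of the catalog and execute them:

        -- attach an assertion to a routine
        EXEC sys.sp_addextendedproperty
             @name = N'Test_Assertion',
             @value = N'IF NOT EXISTS (SELECT 1 FROM dbo.Currency) RAISERROR(''Currency table is empty'', 16, 1);',
             @level0type = N'SCHEMA',    @level0name = N'dbo',
             @level1type = N'PROCEDURE', @level1name = N'uspGetExchangeRate';

        -- pull the assertions back out (e.g. at the end of a build) so they can be run
        SELECT  OBJECT_SCHEMA_NAME(ep.major_id) AS [schema],
                OBJECT_NAME(ep.major_id)        AS [routine],
                CAST(ep.value AS nvarchar(max)) AS [assertion]
        FROM    sys.extended_properties AS ep
        WHERE   ep.name = N'Test_Assertion'
        AND     ep.class = 1;           -- object-level properties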

    Read the article

< Previous Page | 634 635 636 637 638 639 640 641 642 643 644 645  | Next Page >