Search Results

Search found 11513 results on 461 pages for 'level 2'.


  • XHTML validating block level element as a link

    - by Matty F
    I need a way to make an entire DL element clickable with only one anchor tag, and it has to validate as XHTML. As in:

        <a>
          <dl>
            <dt>Data term</dt>
            <dd>Data definition</dd>
          </dl>
        </a>

    This currently doesn't validate as XHTML, as the anchor tag cannot contain the DL. The only way I can get it to validate is to use two anchor tags and place them inside the DT and DD. As in:

        <dl>
          <dt><a>Data term</a></dt>
          <dd><a>Data definition</a></dd>
        </dl>

    I'm trying to avoid this, as it would result in two href attributes requiring maintenance, introducing the possibility that they could become out of sync. Suggestions?

    Read the article

  • Suggestions for implementing a dynamic 2D level

    - by Wouter
    I am working on a game that needs a level that is completely generated. Currently my approach is to draw textures for the levels pixel by pixel during the game (in XNA with SpriteBatch). Unfortunately, this is too intensive: the game drops frames even when I only draw one level texture per draw cycle. The current prototype is a simple sidescroller with the avatar swimming through a cave. The shape of this cave alters throughout the level (both the textures and the physics collision shapes). You can clearly see the boundaries of the level tiles in the screenshot; they are generated just before they move into camera view. For inspiration I looked at PixelJunk Shooter 2. Those levels are obviously not generated, but some of them have movement. How do you think they implemented it? My guess is that the level and other objects in the game are actually flat 3D models, but I am not sure.
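
    Not part of the original post: the usual fix for per-pixel drawing cost is to do the expensive pixel work once per tile, cache the result as a texture, and have the draw loop only blit cached textures. The post uses XNA/C#, so the following Java sketch is only a language-agnostic illustration of that generate-once pattern (all names are made up for the example):

        import java.awt.Color;
        import java.awt.image.BufferedImage;

        /** Sketch: generate each level tile's pixels once, then reuse the cached image every frame. */
        public class TileCache {
            private final BufferedImage[] tiles;

            public TileCache(int tileCount, int size) {
                tiles = new BufferedImage[tileCount];
                for (int i = 0; i < tileCount; i++) {
                    tiles[i] = generateTile(i, size); // expensive per-pixel work happens once
                }
            }

            private BufferedImage generateTile(int seed, int size) {
                BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
                for (int y = 0; y < size; y++) {
                    for (int x = 0; x < size; x++) {
                        // Placeholder procedural pattern; a real cave generator would sample noise here.
                        int shade = (x * 31 + y * 17 + seed * 101) % 256;
                        img.setRGB(x, y, new Color(shade, shade / 2, 64).getRGB());
                    }
                }
                return img;
            }

            /** Per-frame lookup is just an array read; no pixel work in the draw loop. */
            public BufferedImage tile(int index) {
                return tiles[index];
            }
        }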

    Read the article

  • Problems with Level Architect, Citrus Engine, Flash

    - by Idan
    I am using the Citrus Engine to make a Flash game, and the Level Architect doesn't work well for me. Firstly, when I launch it and open my project and my level, nothing is shown - no assets and nothing I have previously done with my level. To fix it, I open another project. The other project works fine, meaning I can see the assets and the level. Then I go back to the actual project I am working on, and the problem is fixed, only it does not fix the second problem: I can't add my own assets. I follow the manual and add tags like this:

        [Property(value="0")]

    But it doesn't change a thing in the Level Architect window (even after I close and reopen it). Any ideas? Thanks! Here's the code of the class I want to be shown in the Level Architect:

        package {

            import com.citrusengine.objects.PhysicsObject;
            import com.citrusengine.objects.platformer.Sensor;

            import flash.utils.clearTimeout;
            import flash.utils.setTimeout;

            /**
             * @author Aymeric
             */
            public class Teleporter extends Sensor {

                [Property(value="0")]
                public var endX:Number = 0;

                [Property(value="0")]
                public var endY:Number = 0;

                public var object:PhysicsObject;

                [Property(value="0")]
                public var time:Number = 0;

                public var needToTeleport:Boolean = false;

                protected var _teleporting:Boolean = false;

                private var _teleportTimeoutID:uint;

                public function Teleporter(name:String, params:Object = null) {
                    super(name, params);
                }

                override public function destroy():void {
                    clearTimeout(_teleportTimeoutID);
                    super.destroy();
                }

                override public function update(timeDelta:Number):void {
                    super.update(timeDelta);
                    if (needToTeleport) {
                        _teleporting = true;
                        _teleportTimeoutID = setTimeout(_teleport, time);
                        needToTeleport = false;
                    }
                    _updateAnimation();
                }

                protected function _teleport():void {
                    _teleporting = false;
                    object.x = endX;
                    object.y = endY;
                    clearTimeout(_teleportTimeoutID);
                }

                protected function _updateAnimation():void {
                    if (_teleporting) {
                        _animation = "teleport";
                    } else {
                        _animation = "normal";
                    }
                }
            }
        }

    Read the article

  • OPN Diamond Level Criteria Update

    - by Cinzia Mascanzoni
    On June 1, 2013, the criteria for Oracle PartnerNetwork members to attain the prestigious Diamond level will change, and all members at the Diamond level at that point will be required to meet the new criteria. This change underscores the requirement for these elite partners to engage across Oracle’s broad product portfolio. Refer to the Diamond Level Requirements on the OPN Portal here for more detail.

    Read the article

  • Change Logging Level for SOA 11g

    - by James Taylor
    I’m sure there are many blogs out there that have this solution, but I seem to get asked this question a lot, so I thought I would post it here for my convenience.

    1. Log in to Enterprise Manager, e.g. http://localhost:7001/em
    2. Expand the SOA folder, right-click the soa-infra (soa_server1) node and select Logs – Log Configuration.
    3. Navigate to the component you want to monitor and change the log level. It is possible to change the level at a parent node if required, but it is not recommended to set the level to FINEST at a parent node, as it will generate a lot of logging.
    4. Make sure you apply the change for it to take effect.

    Simple as that.

    Read the article

  • How difficult is it to change from embedded programming to high-level programming? [on hold]

    - by anudeep shetty
    I have a background in Computer Science. After finishing my Bachelor's degree I worked for over a year on embedded programming on Linux file systems. After that I pursued my master's, where most of my course choices involved web, Java and databases. Now I have an offer from a company for a job working at the OS level. The company is pretty good, but I feel that my master's has gone to waste. I wanted to know: is it common for a Computer Science major to work on low-level coding, and is there a possibility that I can work at this company for some years and then move on to an opportunity where I can work on high-level coding? Also, is working on low-level programming a safe choice in terms of job opportunities?

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - shares, for proportional CPU allocation: if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles;
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use: you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example;
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets: for example, you can throttle an application to 0.125 of a CPU.

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, frighten people into running only one app per OS instance, and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: "Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again." It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost":

    Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound."
    User: "Please do it anyway."
    Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing it enough to keep it from meeting its objectives, and then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get the transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
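
    Not from the original post: the core idea - hold the service-level objective constant and adjust resources until it is met - is essentially a feedback control loop. A minimal Java sketch of that loop (ResourceController and every name in it are illustrative assumptions, not a real Solaris or patent API):

        /**
         * Illustrative feedback loop: keep the service-level objective fixed and
         * adjust CPU shares until it is met. The ResourceController interface is
         * an assumption for the sketch, not a real operating system API.
         */
        interface ResourceController {
            double measuredResponseTimeMillis(); // externally visible metric, e.g. from DTrace probes
            int cpuShares();
            void setCpuShares(int shares);
        }

        public class WorkloadManager {
            private static final int STEP = 10;      // shares to add or remove per adjustment
            private static final double SLACK = 0.9; // back off only when well under target

            /** One pass of the control loop; call it periodically. */
            public static void adjust(ResourceController app, double targetMillis) {
                double observed = app.measuredResponseTimeMillis();
                if (observed > targetMillis) {
                    // Missing the objective: grant more CPU shares.
                    app.setCpuShares(app.cpuShares() + STEP);
                } else if (observed < targetMillis * SLACK && app.cpuShares() > STEP) {
                    // Comfortably beating the objective: release shares for other workloads.
                    app.setCpuShares(app.cpuShares() - STEP);
                }
                // Otherwise: within the target band, leave the allocation alone.
            }
        }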

    Read the article

  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y

    I'd then have each object know what it has to do, and act on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class, but with a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.
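
    Not from the original question: the external-data-file option usually comes down to a small line parser that maps each command to a spawn call. A minimal Java sketch under that assumption (the command keywords and printed actions are illustrative placeholders; a real version would call into the engine instead of printing):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        /** Sketch: parse a plain-text level file into spawn commands. */
        public class LevelLoader {

            public static void load(Path file) throws IOException {
                for (String line : Files.readAllLines(file)) {
                    // File format simplified here to "keyword arg arg" lines.
                    String[] parts = line.trim().split("\\s+");
                    if (parts[0].isEmpty()) continue; // skip blank lines
                    switch (parts[0].toLowerCase()) {
                        case "landscape": // e.g. "landscape grass"
                            System.out.println("set landscape to " + parts[1]);
                            break;
                        case "player":    // e.g. "player 10 20"
                            System.out.println("spawn player at " + parts[1] + "," + parts[2]);
                            break;
                        case "enemy":     // e.g. "enemy 40 12"
                            System.out.println("spawn enemy at " + parts[1] + "," + parts[2]);
                            break;
                        default:
                            System.err.println("unknown command: " + line);
                    }
                }
            }
        }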

    Read the article

  • Shared Object Not saving the level Progress

    - by user3536228
    I am making a Flash game in which I have a variable levelState that describes the current level the user has reached. I am using SharedObject to save the progress, but it does not do so. First I declared two class-level variables:

        private var levelState:Number = 1;
        private var mySaveData:SharedObject = SharedObject.getLocal("levelSave");

    In the Main function I check whether this is the first run of the game, like below:

        if (mySaveData.data.levelsComplete == null) {
            mySaveData.data.levelsComplete = 1;
        }

    And in the function where the winning condition is checked (so that levelState can be increased), I use this SharedObject to hold the value of levelState:

        if (/* winning condition */) {
            levelState++;
            mySaveData.data.levelsComplete = levelState;
            mySaveData.flush();
            setNewLevel(levelState);
        }

    But when I play the game, clear a level, and run the game again, it does not start from that level; it starts from the beginning.

    Read the article

  • Making a level editor for my game

    - by Sherif Maher Eaid
    I am making a 2D sprite-based game in XNA for WP7. The game logic is simple: you start at some point, and you want to avoid obstacles and reach a certain goal. Obviously I need to make many levels for the game to be challenging and fun. I am considering making a level editor for my game, where I would design the level using some kind of GUI which then translates the design to a .lvl file or something that the game can read and interpret as a playable level. Is there an already-made level editor for XNA/WP7?
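
    Not from the original question: the "translate the GUI design to a .lvl the game can read" step is ordinary serialization, and a plain-text format keeps it debuggable. A minimal Java sketch of the editor-side writer, the mirror image of the reader sketched in the earlier Java level-loading entry (the format and all names are illustrative assumptions; an XNA project would use a shared C# library instead):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.List;

        /** Sketch: the editor accumulates placed objects, then writes a plain-text .lvl file. */
        public class LevelWriter {
            private final List<String> commands = new ArrayList<>();

            public void placePlayer(int x, int y)   { commands.add("player " + x + " " + y); }
            public void placeObstacle(int x, int y) { commands.add("obstacle " + x + " " + y); }
            public void setGoal(int x, int y)       { commands.add("goal " + x + " " + y); }

            /** One line per command; the game parses the same format back. */
            public void save(Path file) throws IOException {
                Files.write(file, commands);
            }
        }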

    Read the article

  • How to find level of employee position using RECURSIVE COMMON TABLE EXPRESSION

    - by user309381
    ;WITH Ranked (Empid, Mngrid, Empnm, RN, level) AS
    (
        SELECT Empid, Mngrid, Empnm,
               ROW_NUMBER() OVER (ORDER BY Empid) AS RN,
               0 AS level
        FROM dbo.EmpMngr
    ),
    AnchorRanked (Empid, Mngrid, Empnm, RN, level) AS
    (
        SELECT Empid, Mngrid, Empnm, RN, level
        FROM Ranked
    ),
    RecurRanked (Empid, Mngrid, Empnm, RN, level) AS
    (
        SELECT Empid, Mngrid, Empnm, RN, level
        FROM AnchorRanked
        UNION ALL
        SELECT Ranked.Empid, Ranked.Mngrid, Ranked.Empnm,
               Ranked.RN, Ranked.level + 1
        FROM Ranked
        INNER JOIN RecurRanked
            ON Ranked.Empid = RecurRanked.Empid
           AND Ranked.RN = RecurRanked.RN + 1
    )
    SELECT Empid, Empnm, level
    FROM RecurRanked

    Read the article

  • I've been hired on as an entry-level game developer at a company and have little/no experience in API programming, what should I expect?

    - by Mr. Geneth
    So, I've been hired on as an entry-level game developer with little/no experience working with any API other than Win32. This will be an overall learning experience for me as a person, and I have gone over this multiple times with the boss; he has no problem with my inexperience. He says that if I'm not worth it now, I will be later. This gives me confidence, but I still feel that I should know a lot more before tackling this position. I would be stupid to pass it up. This is one of my favorite places to come for advice and help, and I have tried to just accept this, but it keeps bothering me that I can't go in knowing how to at least do the basics. I want to give the company its money's worth. Ya know? My questions are:

    1. What should I expect from the other programmers in this project (in terms of patience with me, working together, and being taught)?
    2. Is this normal?
    3. Any other advice on this sort of thing would be wonderful.

    I just want to feel comfortable with it.

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process.

    That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time?

    And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
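
    Not from the original question: the "really just a text analysis task" point is easy to make concrete. Here is a toy Java sketch that assembles a two-instruction language into 16-bit words; the instruction set and its encoding are invented for illustration and are not Hack's real encoding:

        /** Toy assembler: translates an invented two-instruction assembly into 16-bit words. */
        public class ToyAssembler {

            /** "LOAD n" -> 0x1000 | n, "ADD n" -> 0x2000 | n (encoding invented for this sketch). */
            public static int assemble(String line) {
                String[] parts = line.trim().split("\\s+");
                int operand = Integer.parseInt(parts[1]) & 0x0FFF; // 12-bit operand field
                switch (parts[0]) {
                    case "LOAD": return 0x1000 | operand;
                    case "ADD":  return 0x2000 | operand;
                    default: throw new IllegalArgumentException("unknown op: " + parts[0]);
                }
            }

            public static void main(String[] args) {
                // Prints each machine word in binary, one per instruction.
                for (String line : new String[] {"LOAD 7", "ADD 3"}) {
                    System.out.printf("%s -> %16s%n", line,
                            Integer.toBinaryString(assemble(line)));
                }
            }
        }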

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11g PS4 to consume a web service which is secured by an HTTP transport level security policy. The WSDL of the remote web service looks like the following, where the wsp:Policy section shows the security policy:

        <?xml version='1.0' encoding='UTF-8'?>
        <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                     xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                     xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                     xmlns:tns="https://httpsbasicauth"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns="http://schemas.xmlsoap.org/wsdl/"
                     targetNamespace="https://httpsbasicauth"
                     name="HttpsBasicAuthService">
          <wsp:UsingPolicy wssutil:Required="true"/>
          <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy">
            <wsp:ExactlyOne>
              <wsp:All>
                <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
                  <wsp:Policy>
                    <ns1:TransportToken>
                      <wsp:Policy>
                        <ns1:HttpsToken RequireClientCertificate="false"/>
                      </wsp:Policy>
                    </ns1:TransportToken>
                    <ns1:AlgorithmSuite>
                      <wsp:Policy>
                        <ns1:Basic256/>
                      </wsp:Policy>
                    </ns1:AlgorithmSuite>
                    <ns1:Layout>
                      <wsp:Policy>
                        <ns1:Strict/>
                      </wsp:Policy>
                    </ns1:Layout>
                  </wsp:Policy>
                </ns1:TransportBinding>
                <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/>
              </wsp:All>
            </wsp:ExactlyOne>
          </wsp:Policy>
          <types>
            <xsd:schema>
              <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/>
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/>
            </xsd:schema>
          </types>
          <message name="echoString">
            <part name="parameters" element="tns:echoString"/>
          </message>
          <message name="echoStringResponse">
            <part name="parameters" element="tns:echoStringResponse"/>
          </message>
          <portType name="HttpsBasicAuth">
            <operation name="echoString">
              <input message="tns:echoString"/>
              <output message="tns:echoStringResponse"/>
            </operation>
          </portType>
          <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth">
            <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/>
            <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
            <operation name="echoString">
              <soap:operation soapAction=""/>
              <input>
                <soap:body use="literal"/>
              </input>
              <output>
                <soap:body use="literal"/>
              </output>
            </operation>
          </binding>
          <service name="HttpsBasicAuthService">
            <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding">
              <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/>
            </port>
          </service>
        </definitions>

    The security assertion in the WSDL (the wsp:Policy section) indicates that this is an HTTP transport level security policy which requires one-way SSL with default authentication (aka basic authentication with username/password). Normally, there are two ways to handle a web service security policy with OSB 11g:

    1. Use WebLogic 9.x policy
    2. Use OWSM

    Since OSB doesn't support the WebLogic 9.x WSSP transport level assertion (except for WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message:

        [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy. For the Business Service, either you can add the necessary client policies manually by clicking Add button or you can let Oracle Service Bus automatically pick and add compatible client policies by clicking Add Compatible button.

    Unfortunately, when we tried with OWSM, we couldn't find http_token_policy in OWSM, since OSB PS4 doesn't support the OWSM http_token_policy. It seems that we ran into an unsupported situation in which no appropriate policy can be used from either WebLogic or OWSM. As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide the username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with the OSB console's complaint about the unsupported security policy, because the failure of WSDL validation prohibits the OSB console from moving forward. With the help of the OSB Product Management team, we finally came up with the following solutions.

    Solution 1: OSB PS5

    The good news is that the http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider the other two workaround solutions described below.

    Solution 2: Modifying the WSDL

    This solution addresses the OSB console's complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, the OSB console allows the business service to be created based on the modified WSDL. Please bear in mind, modifying the WSDL is done only on the OSB side via the OSB console; no change is required on the remote web service. The main steps of this solution:

    1. Connect to the OSB console
    2. Import the remote WSDL into OSB
    3. Remove the security assertion (the wsp:Policy section shown above) from the imported WSDL
    4. Create a service account. In our sample, we simply take the user weblogic
    5. Create the business service, check "Basic" for Authentication and select the created service account
    6. Make sure that OSB consumes the web service via https

    This solution requires modifying the WSDL. It is suitable for any OSB version (10g or 11g) prior to PS5 without OWSM. However, modifying the WSDL by hand is troublesome, as it requires the user to remember that the original WSDL was edited. It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import the WSDL.

    Solution 3: Using the original WSDL

    This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn't like a WSDL with an embedded security assertion. Since OWSM doesn't provide a feature to explicitly ignore the embedded policy from a remote WSDL, in this solution we use OWSM in a tricky way to ignore the embedded policy.

    1. Connect to the OSB console
    2. Import the remote WSDL into OSB
    3. Create a service account
    4. Create the business service, check "Basic" for Authentication and select the created service account
    5. As the imported WSDL is intact, the OSB Kernel:398133 error is expected; ignore this error message for the moment and navigate to the Policies page of the business service
    6. Select "From OWSM Policy Store" and click the "Add" button; the list of policies will pop up
    7. Here is the tricky part: select an arbitrary policy, and click "Cancel"
    8. Update and save

    By clicking the "Cancel" button, we didn't add any OWSM policy to the business service, but the embedded policy is now ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a "None" button which allows the embedded policy to be ignored explicitly. This solution keeps the imported WSDL intact, which is the big advantage over solution 2. It is suitable for an OSB 11g domain (prior to PS5) with OWSM configured.

    This blog addressed the unsupported transport level web service security policy with OSB PS4. To summarize: if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport level security policy directly. With releases prior to 11g PS5, you can consider solution 2 or 3, depending on whether OWSM is configured.

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior, a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three security levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a database log table, like so:

        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               1           USER_SUBMIT      2012-09-01 00:00:00.000
        2               1           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        3               1           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        4               2           USER_SUBMIT      2012-09-01 02:30:00.000
        5               2           APPROVED_LEVEL2  2012-09-01 02:45:00.000

    My question is, which is the better design:

    1. Record all three statuses for every request, or
    2. Record only the statuses needed according to the security level of the user submitting the request

    In case 2, the data might look like this for two requests - one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:

        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               3           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        2               3           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        3               4           APPROVED_LEVEL3  2012-09-01 02:00:00.000
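
    Not from the original question: under either storage option, the set of approvals a request still needs is a pure function of the submitter's security level, so it can live in one place in code. A minimal Java sketch (class and method names are illustrative; the status strings mirror the table above):

        import java.util.ArrayList;
        import java.util.List;

        /** Sketch: derive the approvals a request needs from the submitter's level. */
        public class ApprovalChain {
            private static final int TOP_LEVEL = 3;

            /** A level-1 submitter needs approvals at levels 2 and 3; a level-3 submitter needs none. */
            public static List<String> requiredApprovals(int submitterLevel) {
                List<String> statuses = new ArrayList<>();
                for (int level = submitterLevel + 1; level <= TOP_LEVEL; level++) {
                    statuses.add("APPROVED_LEVEL" + level);
                }
                return statuses;
            }

            public static void main(String[] args) {
                System.out.println(requiredApprovals(1)); // [APPROVED_LEVEL2, APPROVED_LEVEL3]
                System.out.println(requiredApprovals(3)); // [] -> fully approved on submit
            }
        }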

    Read the article

  • What does "cpuid level" mean? Asking just for curiosity

    - by ogzylz
    For example, here is the info for just 2 cores of a 16-core machine. What does the "cpuid level : 6" line mean? If you can also provide info about the lines "bogomips : 5992.10" and "clflush size : 64", I would appreciate it.

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5992.10
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 1
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5985.23
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

    Read the article

  • unit level testing, agile, and refactoring

    - by dsollen
    I'm working on a very agile development system - a small number of people, with me doing the vast majority of the programming myself. I've gotten to the testing phase and find myself writing mostly functional-level tests, which I should in theory be leaving to our tester (in practice I don't entirely... trust our tester to detect and identify defects enough to leave him the sole writer of functional tests).

    In theory what I should be writing is unit-level tests. However, I'm not sure it's worth the expense. Unit testing takes some time to do - more than functional testing, since I have to set up mocks and plugs into smaller units that weren't designed to run in isolation. More importantly, I find I refactor and redesign heavily. Part of this is due to my inheriting code that needed heavy redesign and is still being cleaned up, but even once I've finished removing the parts that need work, I'm sure that in the act of expanding the code I'll still do a decent amount of refactoring and redesign. It feels as if I will break my unit tests, forcing wasted time refactoring them as well, often because unit tests, by definition, have to be coupled so closely to the code structure.

    So: is it worth all the wasted time when functional tests, which will never break when I refactor/redesign, should find most defects? Do unit tests really provide that much extra defect detection over thorough functional tests? And how does one create good unit tests that work with very quick and agile code that is modified rapidly?

    P.S. I would be fine/happy with links to anything one considers an excellent resource for how to 'do' unit testing in a highly changing environment.

    Edit: to clarify, I am doing a bit of very unofficial TDD; I just seem to be writing tests at what would be considered a functional level rather than the unit level. I think part of this is because I own nearly all of the project, so I don't feel I need to limit the scope as much; and part of it is that it's daunting to think of trying to go back and retroactively add the unit tests needed to cover enough code that I can feel comfortable testing only a unit, without the full functionality, and trust that the unit still works with the rest of the units.

    Read the article

  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly.

    I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code.

    This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet.

    Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly?

    Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement level events. The latest beta has this feature available. If you're interested in this I'd like to get some feedback.

    I handle the SP:StmtCompleted and the SQL:StmtCompleted events. These report CPU, reads, writes and duration. I'm not in any way saying it's a good idea to trace these events. Use with caution, as this can make your traces much larger. If there are statement level events in the trace file they will be processed. However, the query screen displays batch level *OR* statement level events. If it did both we'd be double counting.

    I don't have very many traces with statement completed events in them. That means I only did limited testing of how it parses these events. It seems to work well so far, though. Your feedback is appreciated. If you ever write loops or cursors in stored procedures you're going to get huge trace files. Be warned.

    I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added. This is a result of the collection I use being case-sensitive and SQL Server not being case-sensitive. I thought I had properly coded around that but finally realized I hadn't. It should be fixed now.

    If you have any questions or problems, the ClearTrace support forum is the best place for those.

    Read the article

  • SQL SERVER – Check the Isolation Level with DBCC useroptions

    - by pinaldave
    In a recent consultancy project, the coordinator asked me: "Can you tell me what the isolation level is for this database?" I have worked with different isolation levels but had never queried a database for this. I quickly looked it up in Books Online and found the DBCC command that gives these details. You can run the DBCC UserOptions command on any database to get a few details about dateformat and datefirst, as well as the isolation level.

        DBCC useroptions

        Set Option                 Value
        -------------------------- --------------
        textsize                   2147483647
        language                   us_english
        dateformat                 mdy
        datefirst                  7
        lock_timeout               -1
        quoted_identifier          SET
        arithabort                 SET
        ansi_null_dflt_on          SET
        ansi_warnings              SET
        ansi_padding               SET
        ansi_nulls                 SET
        concat_null_yields_null    SET
        isolation level            read committed

    I thought this was a very handy script which I had not used before. Thanks, Gary, for asking the right question.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
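
    Not from the original post: if you need the same check from application code rather than a query window, JDBC exposes the connection's isolation level. A minimal Java sketch (the connection string and credentials are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        /** Sketch: read the connection's transaction isolation level via plain JDBC. */
        public class IsolationCheck {
            public static void main(String[] args) throws SQLException {
                // Placeholder URL and credentials; any SQL Server JDBC driver would do.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=master", "user", "password")) {
                    int level = con.getTransactionIsolation();
                    // The constants are defined on java.sql.Connection.
                    String name = switch (level) {
                        case Connection.TRANSACTION_READ_UNCOMMITTED -> "READ UNCOMMITTED";
                        case Connection.TRANSACTION_READ_COMMITTED   -> "READ COMMITTED";
                        case Connection.TRANSACTION_REPEATABLE_READ  -> "REPEATABLE READ";
                        case Connection.TRANSACTION_SERIALIZABLE     -> "SERIALIZABLE";
                        default -> "OTHER (" + level + ")";
                    };
                    System.out.println("isolation level: " + name);
                }
            }
        }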

    Read the article

  • Learning computer architecture as a programmer

    - by Samaursa
    I typically run across gurus at SO and other places (instructors, book authors etc.) that would say something along the lines "This will cause alignment issues" or other low level tidbits. I want to learn about all these tidbits that are relevant to programming. Now usually when I see low level books (computer architecture books for example) they are too low level and geared towards people whose primary area of interest is computer architecture and not software design. Do you have recommendations for books that go through low-level stuff that is relevant to programmers?

    Read the article

  • Level Editor + Game -> Duplicating rendering/game specific code?

    - by Utkarsh Sinha
    I've been reading about how to design code for a game. One thing I haven't been able to figure out is: how do you manage writing an outside-game level editor (not an 'in-game level editor') without 'copying' code from the game? For example, you might have to copy all the code about the different types of entities you can have, and you'll have to add the game rendering code. My guess is this can be done by making a DLL out of the 'engine' part of the game and then sharing it between the actual game and the level editor. Or is there a better/easier way to do this?
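
    Not from the original question: the shared-library approach amounts to putting entity definitions and rendering behind one interface that both executables depend on. A minimal Java sketch of the idea (all names are illustrative; in XNA the equivalent is a shared class-library project referenced by both the game and the editor):

        /** Shared "engine" module: both the game and the level editor depend on these types. */
        interface Renderer {
            void drawSprite(String textureId, float x, float y);
        }

        /** One entity definition, written once, used by both executables. */
        class Rock {
            final float x, y;
            Rock(float x, float y) { this.x = x; this.y = y; }
            void render(Renderer r) { r.drawSprite("rock", x, y); }
        }

        /** The game supplies a real renderer; the editor supplies one drawing into its GUI canvas. */
        public class SharedEngineDemo {
            public static void main(String[] args) {
                Renderer editorCanvas = (tex, x, y) ->
                        System.out.printf("editor draws %s at (%.0f, %.0f)%n", tex, x, y);
                new Rock(3, 4).render(editorCanvas); // same entity code, different front end
            }
        }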

    Read the article
