Search Results

Search found 24043 results on 962 pages for 'private methods'.

Page 596 of 962

  • Booby Traps and Locked-in Kids: An Interview with a Safecracker

    - by Jason Fitzpatrick
    While most of our articles focus on security of the digital sort, this interview with a professional safecracker is an interesting look at the physical side of securing your goods. As part of their Interviews with People Who Have Interesting or Unusual Jobs series over at McSweeney’s, they interviewed Ken Doyle, a professional locksmith and safecracking veteran with 30 years of industry experience. The interview is both entertaining and informative. One of the more unusual aspects of safecracking he highlights:
    Q: Do you ever look inside?
    A: I NEVER look. It’s none of my business. Involving yourself in people’s private affairs can lead to being subpoenaed in a lawsuit or criminal trial. Besides, I’d prefer not knowing about a client’s drug stash, personal porn, or belly button lint collection. When I’m done I gather my tools and walk to the truck to write my invoice. Sometimes I’m out of the room before they open it. I don’t want to be nearby if there is a booby trap.
    Q: Why would there be a booby trap?
    A: The safe owner intentionally uses trip mechanisms, explosives or tear gas devices to “deter” unauthorized entry into his safe. It’s pretty stupid because I have yet to see any signs warning a would-be culprit about the danger.

    Read the article

  • How to wire finite state machine into component-based architecture?

    - by Pup
    State machines seem to cause harmful dependencies in component-based architectures. How, specifically, is communication handled between a state machine and the components that carry out state-related behavior?
    Where I'm at: I'm new to component-based architectures. I'm making a fighting game, although I don't think that should matter. I envision my state machine being used to toggle states like "crouching", "dashing", "blocking", etc. I've found this state-management technique to be the most natural fit for a component-based architecture, but it conflicts with techniques I've read about:
    - Dynamic Game Object Component System for Mutable Behavior Characters suggests that all components activate/deactivate themselves by continually checking a condition for activation.
    - I think that actions like "running" or "walking" make sense as states, which is in disagreement with the accepted response here: finite state machine used in mario like platform game.
    - I've found this useful, but ambiguous: How to implement behavior in a component-based game architecture? It suggests having a separate component that contains nothing but a state machine. But this necessitates some kind of coupling between the state machine component and nearly all the other components. I don't understand how this coupling should be handled. These are some guesses:
    A. Components depend on state machine: components receive a reference to the state machine component's getState(), which returns an enumeration constant. Components update themselves regularly and check it as needed.
    B. State machine depends on components: the state machine component receives references to all the components it's monitoring and queries their getState() methods to see where they're at.
    C. Some abstraction between them: use an event hub? Command pattern?
    D. Separate state objects that reference components: the State pattern is used. Separate state objects are created, which activate/deactivate a set of components, and the state machine switches between state objects.
    I'm looking at components as implementations of aspects. They do everything that's needed internally to make that aspect happen. It seems like components should function on their own, without relying on other components. I know some dependencies are necessary, but state machines seem to want to control all of my components.
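
    One way to make option C concrete: a minimal C# sketch, assuming an event-based hub where the state machine owns the transitions and components subscribe to changes instead of polling or being polled (all type and member names here are illustrative):

        using System;

        enum CharacterState { Idle, Crouching, Dashing, Blocking }

        class StateMachine
        {
            public CharacterState Current { get; private set; } = CharacterState.Idle;

            // (previous, next) is published on every transition.
            public event Action<CharacterState, CharacterState> StateChanged;

            public void ChangeState(CharacterState next)
            {
                if (next == Current) return;   // ignore no-op transitions
                var previous = Current;
                Current = next;
                StateChanged?.Invoke(previous, next);
            }
        }

        // A component enables/disables itself; the machine never references it.
        class MovementComponent
        {
            public bool Enabled { get; private set; } = true;

            public MovementComponent(StateMachine fsm)
            {
                fsm.StateChanged += (previous, next) =>
                    Enabled = next != CharacterState.Blocking;  // no moving while blocking
            }
        }

    With this arrangement, the only thing shared between the machine and the components is the event contract: components can be added or removed without touching the machine, which avoids the mutual references in options A and B.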

    Read the article

  • Cannot ping my domain-joined server - Can only ping domain controller - host unreachable

    - by Vazgen
    I have a Hyper-V server hosting a domain controller VM (192.168.1.50) and another VM (192.168.1.51) joined to this domain. I have: the domain controller as DNS server; a forward lookup zone for the domain with host records for 192.168.1.50 and 192.168.1.51; the Windows client's primary DNS server set to 192.168.1.50 and its secondary set to my ISP. I can ping 192.168.1.50 (the domain controller) successfully but cannot ping 192.168.1.51 (the domain-joined VM). When pinging from the Windows client, ping 192.168.1.51 gives: Reply from 192.168.1.129: Destination host unreachable. When pinging from the domain controller, ping 192.168.1.51 gives: Reply from 192.168.1.50: Destination host unreachable. I have 2 virtual network adapters, one PRIVATE for the intranet (set to static IP 192.168.1.51) and one PUBLIC for the internet with a dynamic IP. I noticed that the PUBLIC one inherited the "mydomain.com" DNS suffix after joining the domain. I don't know what this meant, but it seemed more intuitive to me to switch THIS ONE to have the static IP. After I configured that, I still could not ping, but now I get: Request timed out. What seems to be the issue? I'm relatively new to networking.

    Read the article

  • Most secure way of connecting an intranet to an external server

    - by Eitan
    I have an internal server that hosts an ASP.NET intranet application. I want to keep it completely and utterly secure and private; however, we need to expose some information through a WCF service to another server which hosts our external websites, which CAN be accessed by the public. What is the best way to pass information between the two servers, with regard to the IT setup, while keeping the in-house intranet server completely secure and inaccessible? I've heard VPN was the way to go, but I wanted to be sure this was the safest way. Another question: what would be the most secure way of passing data in the WCF service?
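
    On the second question, a minimal sketch of one common setup, assuming certificate-based transport security for the service-to-service hop (the contract name and URL are made up for illustration; the binding types are standard WCF):

        using System.ServiceModel;

        [ServiceContract]
        public interface IReportingService   // illustrative contract
        {
            [OperationContract]
            string GetPublicSummary();
        }

        public static class ReportingClient
        {
            public static string FetchSummary()
            {
                // Transport security = TLS on the wire; certificate credentials
                // mean only a caller holding the expected cert is accepted.
                var binding = new WSHttpBinding(SecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType =
                    HttpClientCredentialType.Certificate;

                var factory = new ChannelFactory<IReportingService>(
                    binding,
                    new EndpointAddress("https://intranet-server/reporting"));

                // The client certificate would be attached via
                // factory.Credentials.ClientCertificate before calling.
                return factory.CreateChannel().GetPublicSummary();
            }
        }

    Combined with a firewall rule that only lets the external web server's address reach that endpoint (or routes it over the VPN), the intranet box never has to accept arbitrary inbound traffic.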

    Read the article

  • Two Apache Server Root

    - by Sithu Kyaw
    I am using Apache Friends (XAMPP). I installed it under the C: drive, at C:\xampp\. Its default document root is C:\xampp\htdocs, thus all programs need to reside in C:\xampp\htdocs\ so that we can run http://localhost/myapp/. phpMyAdmin comes along with XAMPP, but it resides in C:\xampp\ and it can be run from /localhost/phpMyAdmin/. When my application is moved to C:\xampp\, I cannot run it at /localhost/myapp. I would like to have two server roots, C:\xampp\ and C:\xampp\htdocs\, so that I can separate my private apps and public apps into different folders, and both can be run from http://localhost/, such as /localhost/myprivateapp/ and /localhost/mypublicapp/. How can I do that? I'm on Windows XP.
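
    For what it's worth, phpMyAdmin itself lives outside htdocs exactly this way: XAMPP maps it with an Alias directive rather than a second document root. A sketch of the same idea in httpd.conf (the folder name is assumed; the access directives match the Apache 2.2 shipped with XAMPP of that era):

        # Map a URL prefix to a folder outside the document root.
        Alias /myprivateapp "C:/xampp/myprivateapp"

        <Directory "C:/xampp/myprivateapp">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    After a restart, http://localhost/myprivateapp/ serves from C:\xampp\myprivateapp while htdocs keeps serving the public apps; a true second document root isn't needed (and a single vhost can't have two).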

    Read the article

  • ASP.NET MVC 4: Short syntax for script and style bundling

    - by DigiMortal
    ASP.NET MVC 4 introduces new methods for style and script bundling. I found something brilliant there that I want to introduce to you. In this posting I will show you how easy it is to include a whole folder of stylesheets or JavaScript files in your page. I'm using the ASP.NET MVC 4 Internet Site template for this example. When we open the layout page located in the shared views folder, we can see something like this in the layout file header:

        <link href="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Content/css")" rel="stylesheet" type="text/css" />
        <link href="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Content/themes/base/css")" rel="stylesheet" type="text/css" />
        <script src="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Scripts/js")"></script>

    Let's take the last line and modify it so it looks like this:

        <script src="/Scripts/js"></script>

    After saving the layout page, let's run the browser and see what is coming in over the network. As you can see, the request to the folder ended up with result code 200, which means the request was successful. 327.2KB was received, and that is not the mark-up size of an error page or directory index. I scrolled down to the point where one script ends and another one starts when I made the screenshot above. All scripts delivered with ASP.NET MVC project templates start with this green note, so now we can be sure that the request to the scripts folder ended up with a bundled script and not with something else. Conclusion: script and style bundling currently uses the long syntax by default, where bundling is done through the Bundling class. We can still avoid those long lines and use an extremely short syntax for script and style bundling: we just write a usual script or link tag and give a folder URL as the source. ASP.NET MVC 4 is smart enough to combine the styles or scripts when a request like this comes in.
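
    For reference, this pre-release API was reshaped before RTM; in the final ASP.NET MVC 4 bits, bundles are registered explicitly rather than resolved per folder, roughly like this (the bundle paths are illustrative):

        using System.Web.Optimization;

        public class BundleConfig
        {
            // Called from Application_Start in Global.asax.
            public static void RegisterBundles(BundleCollection bundles)
            {
                // One virtual URL serving the combined, minified scripts.
                bundles.Add(new ScriptBundle("~/bundles/scripts")
                    .IncludeDirectory("~/Scripts", "*.js"));

                bundles.Add(new StyleBundle("~/Content/css")
                    .Include("~/Content/site.css"));
            }
        }

    Views then reference the bundles with @Scripts.Render("~/bundles/scripts") and @Styles.Render("~/Content/css").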

    Read the article

  • Would firewire networking be better than 100Megabit ethernet?

    - by Josh
    My office network has a fully switched 1000Megabit ethernet network. I have an Apple iMac with a Gigabit NIC and FireWire, and a Compaq laptop with a 100Megabit NIC and a 4-pin FireWire interface. Accessing my office's shared drives is (obviously) much slower from my laptop than from my iMac. Would I see a noticeable performance boost if I enabled Internet Connection Sharing on my iMac and shared the private ethernet network from my iMac with my laptop over FireWire? FireWire is 480Mbit/sec, right? So would I see roughly a 4x speed improvement with such a setup?

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes currently on your SQL Server instance. This function takes four input parameters: (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let's have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of the columns, I won't even attempt to display all of them here. My query below will give you a subset of the columns returned from this function.

        SELECT database_id, object_id, index_id, partition_number,
               leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count,
               nonleaf_insert_count, nonleaf_delete_count, nonleaf_update_count,
               range_scan_count, forwarded_fetch_count,
               row_lock_count, row_lock_wait_count, page_lock_count, page_lock_wait_count,
               index_lock_promotion_attempt_count, index_lock_promotion_count,
               page_compression_attempt_count, page_compression_success_count
        FROM sys.dm_db_index_operational_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL)

    The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULLs as I did, then you will see results for every index on your system. I specified a database_id, so my result set only shows records pertaining to my AdventureWorks2012 database. The next columns in the result set provide you with information on how many inserts, deletes, or updates have taken place on your leaf and nonleaf index levels; the nonleaf levels refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted and marked as "ghosted" and are waiting on the background ghost cleanup process to physically remove them. The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned via a forwarding row pointer. The row_lock_count and row_lock_wait_count columns represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock, respectively. The page_lock_count and page_lock_wait_count columns represent the same for page locks. The index_lock_promotion_attempt_count column represents the number of times the database engine has attempted to promote a lock to the index level, and index_lock_promotion_count displays how many times that promotion was successful. Lastly, page_compression_attempt_count and page_compression_success_count represent how many times page compression was attempted and how many times it succeeded. As you can see, there is a ton of information returned from this DMF. The DMV we reviewed yesterday (sys.dm_db_index_usage_stats) provided good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis of the operational stats of your indexes, this is not only a good place to start, but more than likely the best place.
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Odd Profiler Results with EF4

    - by AjarnMark
    I have been doing some testing of using the Microsoft Entity Framework 4 with stored procedures and ran across some really odd results in SQL Server Profiler. The application that is running which uses Entity Framework 4 is a simple web application written in C#, and the Entity Data Model is actually contained in a referenced class library of its own. I'll write more about my experiences with this later. For now the question is: why does SQL Profiler think that the stored procedure is running in Master, and not in my application database? While analyzing the effects of using custom helper methods on my EDM classes to call the stored procedure, I decided to run Profiler while I stepped through the code so that I had a clear understanding of exactly when and what calls were made to the SQL Server. I ran Profiler switching back and forth between the TSQL and TSQL_SP templates. However, to reduce the number of result rows I needed to wade through, I set a filter on DatabaseID to be equal to my application's database. Each time I ran this, the only thing that I saw was an Audit:Login to the database, but no procedure or T-SQL statements executed, yet I was definitely getting results back to my web page. I tried other Profiler templates, still filtering on DatabaseID (tangent: I found, at least back in SQL 2000 Profiler, that filtering on DatabaseID was more reliable than filtering on DatabaseName. Even though I'm now running SQL 2008, that habit sticks with me). Still no results other than the Login. Very weird! Finally, I decided to run Profiler with no filtering and discovered that the lines which represent my stored procedure and its T-SQL commands are all marked with DatabaseID = 1, which is Master. Why in the world would that be? My procedure is definitely in the application database, and not in Master, and there is nothing funny about the call to the procedure evident in Profiler (i.e. it is not called as MyAppDB.dbo.MyProcName, but rather just dbo.MyProcName). There must be something funny with the way the Entity Framework is wrapping this call, and I don't like it…I don't like it one bit. My primary PROD server contains 40+ databases, and when I need to profile something, I expect to be able to filter based on DatabaseID (for the record, I displayed DatabaseName in my results, too, and it also shows Master). I find the same pattern of everything except the Login showing up as being in Master when I run my version that uses standard LINQ to Entities instead of stored procedures, so that suggests it is not my code, but rather something funny with SQL Server 2008 Profiler or the Entity Framework. If you have any ideas about why this might be so, please comment below.

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases, and it doesn’t break…it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, un-scalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • Oracle Sun Solaris 11.1 Completes EAL4+ Common Criteria Evaluation

    - by Joshua Brickman-Oracle
    Oracle is pleased to announce that the Oracle Solaris 11.1 operating system has achieved Common Criteria certification at Evaluation Assurance Level (EAL) 4, augmented by Flaw Remediation, under the Canadian Communications Security Establishment’s (CSEC) Canadian Common Criteria Scheme (CCCS). EAL4 is the highest level achievable for commercial software, and is the highest level mutually recognized by 26 countries under the current Common Criteria Recognition Arrangement (CCRA). Oracle Solaris 11.1 is conformant to the BSI Operating System Protection Profile v2.0 with the following four extended packages: (1) Advanced Management, (2) Extended Identification and Authentication, (3) Labeled Security, and (4) Virtualization. Common Criteria is an international framework (ISO/IEC 15408) which defines a common approach for evaluating the security features and capabilities of information technology security products. A certified product is one that a recognized Certification Body asserts as having been evaluated by a qualified, accredited, and independent evaluation laboratory competent in the field of IT security evaluation to the requirements of the Common Criteria and Common Methodology for Information Technology Security Evaluation. Oracle Solaris, the industry’s most widely deployed UNIX™ operating system, delivers mission-critical cloud infrastructure with built-in virtualization, simplified software lifecycle management, cloud-scale data management, and advanced protection for public, private, and hybrid cloud environments. It provides a suite of technologies and applications that create an operating system with optimal performance. Oracle Solaris 11.1 includes key technologies such as Trusted Extensions, the Oracle Solaris Cryptographic Framework, Zones, the ZFS file system, the Image Packaging System (IPS), and multiple boot environments. The Oracle Solaris 11.1 Certification Report and Security Target can be viewed on the Communications Security Establishment Canada (CSEC) site and on the Common Criteria Portal. For more information on Oracle’s participation in the Common Criteria program, please visit the main Common Criteria information page here: (http://www.oracle.com/technetwork/topics/security/oracle-common-criteria-095703.html) For a complete list of Oracle products with Common Criteria certifications and FIPS 140-2 validations, please see the Security Evaluations website here: (http://www.oracle.com/technetwork/topics/security/security-evaluations-099357.html).

    Read the article

  • Is the science of Computer Science dead?

    - by Veaviticus
    Question: Is the science and art of CS dead? By that I mean, the real requirements to think, plan and efficiently solve problems seem to be falling away from CS these days. The field seems to be lowering the entry barrier so more people can 'program' without having to learn how to truly program.
    Background: I'm a recent graduate with a BS in Computer Science. I'm working a starting position at a decent-sized company in the IT department. I mostly do .NET and other Microsoft technologies at my job, but before this I've done Java stuff through internships and the like. I personally am a C++ programmer for my own for-fun projects.
    In depth: Through the work I've been doing, it seems to me that the intense disciplines of a real science don't exist in CS anymore. In the past, programmers had to solve problems efficiently in order for systems to be robust and quick. But now, with the prevailing technologies like .NET, Java and scripting languages, it seems like efficiency and robustness have been traded for ease of development. Most of the colleagues that I work with don't even have degrees in Computer Science. Most graduated with Electrical Engineering degrees, a few with Software Engineering, even some who came from tech schools without a 4-year program. Yet they get by just fine without having the technical background of CS, without having studied theories and algorithms, without having any regard for making an elegant solution (they just go for the easiest, cheapest solution). The company pushes us to use Microsoft technologies, which take all the real thought out of the matter and replace it with libraries and tools that can auto-build your project for you half the time. I'm not trying to hate on the languages; I understand that they serve a purpose and do it well, but when your employees don't know how a hash table works, and use the wrong sorting methods, or run SQL commands that are horribly inefficient (but get the job done in an acceptable time), it feels like more effort is being put into developing technologies that coddle new 'programmers' rather than actually teaching people how to do things right. I am interested in making efficient and, in my opinion, beautiful programs. If there is a better way to do it, I'd rather go back and refactor it than let it slide. But in the corporate world, they push me to complete tasks quickly rather than elegantly. And that really bugs me. Is this what I'm going to be looking forward to the rest of my life? Are there still positions out there for people who love the science and art of CS rather than just the paycheck? And on the same note, here's a good read if you haven't seen it before: The Perils of Java Schools.

    Read the article

  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this class I can create a sphere:

        public class Ball {
            private final GL10 gl;
            private final float radius;
            private final FloatBuffer sphereVertex;
            private final int points;
            private Texture texture;                      // texture helper (assumed)
            private static final double STEP = 15;        // angular step in degrees (assumed)

            public Ball(GL10 gl, float radius) {
                this.gl = gl;
                this.radius = radius;
                ByteBuffer bb = ByteBuffer.allocateDirect(40000);
                bb.order(ByteOrder.nativeOrder());
                sphereVertex = bb.asFloatBuffer();
                points = build();
            }

            private int build() {
                double dTheta = STEP * Math.PI / 180;
                double dPhi = dTheta;
                int points = 0;

                for (double phi = -(Math.PI / 2); phi <= Math.PI / 2; phi += dPhi) {
                    for (double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                        sphereVertex.put((float) (radius * Math.sin(phi) * Math.cos(theta)));
                        sphereVertex.put((float) (radius * Math.sin(phi) * Math.sin(theta)));
                        sphereVertex.put((float) (radius * Math.cos(phi)));
                        points++;
                    }
                }

                sphereVertex.position(0);
                return points;
            }

            public void draw() {
                texture.bind();
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex);
                gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            }
        }

    My problem now is that I want to use this texture for the sphere, but then only a black ball is created (of course, because the top right corner is black). I use these texture coordinates because I want to use the whole texture: 0|0 0|1 1|1 1|0. That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?
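
    The missing piece is a per-vertex texture coordinate array: four corner coordinates can describe a quad, but a sphere needs a (u, v) pair for every vertex, typically derived from the two sphere angles. A sketch of that, assuming a second FloatBuffer called sphereTexCoord allocated like the vertex buffer:

        // Inside build(), next to each vertex, store (u, v) so the
        // texture wraps exactly once around the sphere:
        sphereTexCoord.put((float) (theta / (2 * Math.PI)));  // u in [0, 1]
        sphereTexCoord.put((float) (phi / Math.PI + 0.5));    // v in [0, 1]

        // In draw(), hand the coordinates to the fixed-function pipeline:
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, sphereTexCoord);
        // ... glVertexPointer / glDrawArrays as before ...
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    Without a glTexCoordPointer, every fragment samples a single texel, which is why the ball comes out in one flat color.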

    Read the article

  • ArchBeat Top 10 for November 11-17, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook page for the week of November 11-17, 2012.
    - Developing and Enforcing a BYOD Policy: Darin Pendergraft's post includes links to a recent Mobile Access Policy Survey by SANS as well as registration information for a Nov 15 webcast featuring security expert Tony DeLaGrange from Secure Ideas, SANS instructor, attorney and technology law expert Ben Wright, and Oracle IDM product manager Lee Howarth.
    - This Week on the OTN Architect Community Homepage: Make time to check out this week's features on the OTN Solution Architect Homepage, including the SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the conclusion of the 3-part OTN ArchBeat Podcast on future-proofing your career.
    - WLST Starting and Stopping a WebLogic Environment | Rene van Wijk: Oracle ACE Rene van Wijk explores how to start a server with as little input as possible.
    - Cloud Integration White Paper | Bruce Tierney: Bruce Tierney shares an overview of Cloud Integration - A Comprehensive Solution, a new white paper he co-authored with David Baum, Rajesh Raheja, and Vijay Pawar.
    - X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil: Abhijit Patil's article focuses on how to use X.509 certificate revocation checking functionality with the OCSP protocol to validate in-bound certificates. Although the article focuses on inbound validation, Oracle WebLogic Server 12c also supports outbound OCSP validation.
    - Update on My OBIEE / Exalytics Books | Mark Rittman: Oracle ACE Director Mark Rittman shares several resources related to his books Oracle Business Intelligence 11g Developers Guide and Oracle Exalytics Revealed, including a podcast interview with Oracle's Paul Rodwick.
    - E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c | Elke Phelps: "You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments," reports Elke Phelps. There's a lot more information about this announcement in Elke's post.
    - WebLogic Application Server: free for developers! | Bruno Borges: Java blogger Bruno Borges shares news about important changes in the license agreement for Oracle WebLogic Server.
    - Agile Architecture | David Sprott: "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post.
    - My iPad & This Cloud Thing | Floyd Teter: Oracle ACE Director Floyd Teter explains why the Cloud is making it possible for him to use his iPad for tasks previously relegated to his laptop, and why this same scenario is likely to play out for a great many people.
    Thought for the Day: "In programming, the hard part isn't solving problems, but deciding what problems to solve." (Paul Graham, via SoftwareQuotes.com)

    Read the article

  • VISIT ORACLE LINUX PAVILION @ORACLE OPENWORLD

    - by Zeynep Koch
    Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money. Partners exhibiting their solutions in the Oracle Linux Pavilion include:
    - BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
    - Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
    - Data Intensity: cloud services and application management
    - Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
    - HP: converged cloud, converged infrastructure, application transformation, and information optimization
    - LSI: intelligent solid-state storage solutions for breakthrough database acceleration
    - Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
    - Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
    - NetApp: storage and data management
    - QLogic: high performance networking
    - Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database
    Be sure to pick up your free Oracle Linux and Oracle VM DVD Kit if you visit one of these partners. And speaking of free, be sure to stop by for some cool treats, courtesy of sponsor QLogic:
    - Smoothie Bar on Monday, October 1 from 2:30 p.m. - 5:30 p.m.
    - Ice Cream Social on Wednesday, October 3 from 1:00 p.m. - 2:00 p.m.
    We look forward to seeing you at the pavilion.

    Read the article

  • Thinktecture.IdentityModel: WRAP and SWT Support

    - by Your DisplayName here!
    The latest drop of Thinktecture.IdentityModel contains some helpers for the Web Resource Authorization Protocol (WRAP) and Simple Web Tokens (SWT).
    WRAP
    The WrapClient class is a helper to request SWT tokens via WRAP. It supports issuer/key, SWT and SAML input credentials, e.g.:

        var client = new WrapClient(wrapEp);
        var swt = client.Issue(issuerName, issuerKey, scope);

    All Issue overrides return a SimpleWebToken type, which brings me to the next helper class.
    SWT
    The SimpleWebToken class wraps a SWT token. It combines a number of features: conversion between string format and CLR type representation, creation of SWT tokens, validation of SWT tokens, projection of a SWT token as IClaimsIdentity, and helpers to embed SWT tokens in headers and query strings. The following sample code generates a SWT token using the helper class:

        private static string CreateSwtToken()
        {
            var signingKey = "wA…";
            var audience = "http://websample";
            var issuer = "http://self";

            var token = new SimpleWebToken(
                issuer, audience, Convert.FromBase64String(signingKey));
            token.AddClaim(ClaimTypes.Name, "dominick");
            token.AddClaim(ClaimTypes.Role, "Users");
            token.AddClaim(ClaimTypes.Role, "Administrators");
            token.AddClaim("simple", "test");

            return token.ToString();
        }

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The game play consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better. I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:
    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess).
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces).
    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplies them by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:
    1. Player starts to drag a piece
    2. UI asks game logic for the legal movement area of that piece and lets the player drag it within that area
    3. Player lets go of a piece
    4. UI snaps the piece to the grid (so that it is at a valid logical position)
    5. UI tells game logic the new logical position (via mutator methods, which I'd rather avoid)
    I'm not quite happy with that: I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state; I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI. I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:
    1. Is such a physics layer common, or is it more typical to have the game logic layer do this?
    2. Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
    3. Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    4. I've seen event-based collision detection in a game's code base once; that is, the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach or asking for the legal movement area first?
    [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
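
    One way to cut it, as a C# sketch under stated assumptions (all names invented for illustration): keep collision and snapping rules in the logic/physics layer, expressed in grid units, so the UI is reduced to unit conversion.

        using System;

        // Logic layer: all legality lives here, in grid units.
        public class Board
        {
            public int Width { get; }
            private readonly bool[,] occupied;  // true where another block sits

            public Board(int width, int height)
            {
                Width = width;
                occupied = new bool[width, height];
            }

            // Legal horizontal interval for a block in a row: slide out
            // from its current column until something is in the way.
            public (int Min, int Max) LegalRange(int col, int row, int blockWidth)
            {
                int min = col, max = col;
                while (min > 0 && !occupied[min - 1, row]) min--;
                while (max + blockWidth < Width && !occupied[max + blockWidth, row]) max++;
                return (min, max);
            }
        }

        // UI layer: converts pixels to cells and back; never decides legality.
        public static class DragHelper
        {
            private const int CellSize = 32;  // pixels per grid unit (assumed)

            public static int SnapDrag(int pixelX, (int Min, int Max) legal)
            {
                int nearest = (int)Math.Round(pixelX / (double)CellSize);
                return Math.Clamp(nearest, legal.Min, legal.Max) * CellSize;
            }
        }

    This matches step 2 of the flow above: the UI asks for the legal range once at drag start and clamps against it while dragging. The unit tests then live where the tricky code is (Board), the UI's snapping collapses to a rounding plus a clamp, and whether Board is called "game logic" or "physics" becomes a naming question.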

    Read the article

  • Scrum and Team Consolidation

    - by John K. Hines
    I’m still working my way through one of the more painful team consolidations of my career. One thing that’s made it hard was my assumption that the use of Agile methods and Scrum would make everything easy: take three teams, make all work visible, track it, and presto: an efficient, functioning software development team. What I’ve come to realize is that the primary benefit of Scrum is that Scrum brings teams closer to their customers. Frequent meetings, short iterations, and phased deployments are all meant to keep the customer in the loop. It’s true that as teams become proficient with Scrum they tend to become more efficient. But I don’t think it’s true that Scrum automatically helps people work together. Instead, Scrum can point out when teams aren’t good at working together. And it really illustrates when teams, especially teams in sustaining mode, are reacting to their customers instead of innovating with them. At the moment we’ve inherited a huge backlog of tools, processes, and personalities. It’s up to us to sort them all out. Unfortunately, after 7½ months we’re still sorting. What I’d recommend for any blended team is to look at your current product lifecycles and work on a single lifecycle for all work. If you can’t objectively come up with one process, that’s a good indication that the new team might not be a good fit for being a single unit (which happens all the time in bigger companies). Go ahead and self-organize into sub-teams. Then repeat the process. If you can come up with a single process, tackle each piece and standardize all of them. Do this as soon as possible, as it can be uncomfortable. Standardize your requirements gathering and tracking, your exploration and technical analysis, your project planning, development standards, validation and sustaining processes. Standardize all of it. Make this your top priority, get it out of the way, and get back to work. Lastly, managers of blended teams should realize that what I’m suggesting is a disruptive process. But you’ve just reorganized; the team is already disrupted. Don’t pull the bandage off slowly and force the team through a prolonged transition phase, lowering their productivity over the long term. You can role-model leadership to your team and drive a true consolidation. Destroy roadblocks, reassure those on your team who are afraid of change, and push forward to create something efficient and beautiful. Then use Scrum to reengage your customers in a way that they’ll love.

    Read the article

  • OSX: DNS records won't change?

    - by Marko
    I've had two sites on my local host and have had mysite1.local and mysite2.local in my /etc/hosts file pointing to localhost. Now, I moved those sites to my home server (Ubuntu, local network) and made the corresponding changes in the hosts file (/private/etc/hosts is the same file):

        192.168.0.50 mysite1.local
        192.168.0.50 mysite2.local

    I flushed my DNS cache with sudo dscacheutil -flushcache, rebooted the machine, and reset Safari, but still no changes. If I try to go to either mysite1.local or mysite2.local, it still points to localhost. What could be the problem?

    Read the article

  • EC2 custom topology

    - by Methos
    Is there any way to create a desired topology of EC2 instances? For example, can I create a 3-node topology of nodes A, B, C, where C gets the public IP address and B and A sit behind it? Something like: Internet <-- C <-- B <-- A. B and A only get private IP addresses, and there is no way for traffic to reach A before hitting B and C. This means I can install whatever I want on C and B to filter, cache, etc. I'm going through the EC2 documentation, but so far I have not seen anything that talks about this. I will really appreciate it if anyone knows how to do this on EC2.

    Read the article

  • Proper network configuration for a KVM guest to be on the same networks as the host

    - by Steve Madsen
    I am running a Debian Linux server on Lenny. Within it, I am running another Lenny instance using KVM. Both servers are externally available, with public IPs, as well as a second interface with private IPs for the LAN. Everything works fine, except the VM sees all network traffic as originating from the host server. I suspect this might have something to do with the iptables-based firewall I'm running on the host. What I'd like to figure out is: how do I properly configure the host's networking such that all of these requirements are met?
    - Both host and VMs have 2 network interfaces (public and private).
    - Both host and VMs can be independently firewalled.
    - Ideally, VM traffic does not have to traverse the host firewall.
    - VMs see real remote IP addresses, not the host's.
    Currently, the host's network interfaces are configured as bridges. eth0 and eth1 do not have IP addresses assigned to them, but br0 and br1 do. /etc/network/interfaces on the host:

        # The primary network interface
        auto br1
        iface br1 inet static
            address 24.123.138.34
            netmask 255.255.255.248
            network 24.123.138.32
            broadcast 24.123.138.39
            gateway 24.123.138.33
            bridge_ports eth1
            bridge_stp off

        auto br1:0
        iface br1:0 inet static
            address 24.123.138.36
            netmask 255.255.255.248
            network 24.123.138.32
            broadcast 24.123.138.39

        # Internal network
        auto br0
        iface br0 inet static
            address 192.168.1.1
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            bridge_ports eth0
            bridge_stp off

    This is the libvirt/qemu configuration file for the VM:

        <domain type='kvm'>
          <name>apps</name>
          <uuid>636b6620-0949-bc88-3197-37153b88772e</uuid>
          <memory>393216</memory>
          <currentMemory>393216</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='i686' machine='pc'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='cdrom'>
              <target dev='hdc' bus='ide'/>
              <readonly/>
            </disk>
            <disk type='file' device='disk'>
              <source file='/raid/kvm-images/apps.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
            <interface type='bridge'>
              <mac address='54:52:00:27:5e:02'/>
              <source bridge='br0'/>
              <model type='virtio'/>
            </interface>
            <interface type='bridge'>
              <mac address='54:52:00:40:cc:7f'/>
              <source bridge='br1'/>
              <model type='virtio'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target port='0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
          </devices>
        </domain>

    Along with the rest of my firewall rules, the firewalling script includes this command to pass packets destined for a KVM guest:

        # Allow bridged packets to pass (for KVM guests).
        iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    (Not applicable to this question, but a side effect of my bridging configuration appears to be that I can't ever shut down cleanly. The kernel eventually tells me "unregister_netdevice: waiting for br1 to become free" and I have to hard-reset the system. Maybe a sign I've done something dumb?)
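
    Given the symptom that the VM sees everything as coming from the host, one thing worth checking (a guess, not a diagnosis): if the host's firewall script has a broad MASQUERADE or SNAT rule and bridge-netfilter is enabled, bridged guest traffic can be source-NATed too. A quick way to look, and the knobs involved:

        # List source-NAT rules; a catch-all MASQUERADE here would explain the rewriting.
        iptables -t nat -L POSTROUTING -v -n

        # Check whether bridged packets are being fed to iptables at all.
        sysctl net.bridge.bridge-nf-call-iptables

        # If so, either disable that, or exempt the bridges from NAT:
        iptables -t nat -I POSTROUTING -o br0 -j ACCEPT
        iptables -t nat -I POSTROUTING -o br1 -j ACCEPT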

    Read the article

  • ssl between balancer members?

    - by jemminger
    I have apache running on one machine as a load balancer:

        <VirtualHost *:443>
            ServerName ssl.example.com
            DocumentRoot /home/example/public

            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/example.crt
            SSLCertificateKeyFile /etc/pki/tls/private/example.key

            <Proxy balancer://myappcluster>
                BalancerMember http://app1.example.com:12345 route=app1
                BalancerMember http://app2.example.com:12345 route=app2
            </Proxy>

            ProxyPass / balancer://myappcluster/ stickysession=_myapp_session
            ProxyPassReverse / balancer://myappcluster/
        </VirtualHost>

    Note that the balancer takes requests on SSL port 443, but then communicates with the balancer members on a non-SSL port. Is it possible for the forwarding to the balancer members to be under SSL too? If so, is this the best/recommended way? If so, do I have to have another SSL cert for each balancer member? Does the SSLProxyEngine directive have anything to do with this?
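
    On the last question: yes, SSLProxyEngine is exactly the relevant directive. It enables SSL for the proxy's outbound connections, after which the members can be listed as https:// URLs. A sketch, assuming each app server is reconfigured to speak TLS on its port:

        SSLProxyEngine on

        <Proxy balancer://myappcluster>
            BalancerMember https://app1.example.com:12345 route=app1
            BalancerMember https://app2.example.com:12345 route=app2
        </Proxy>

    Each member then needs its own certificate (or a shared one the proxy trusts). Whether re-encrypting this hop is worth the CPU depends on whether the balancer-to-member network is shared with untrusted hosts.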

    Read the article

  • How to deal with a 'public' work environment?

    - by Craige
    In the last 6 months, I have changed desks at my office 4 times. I don't mind, as it's due to expansion of the company and acquiring new office space and getting everybody settled. However, I truly miss the semi-private office I sat in 2 desks ago. I am now sitting in a large room with a number of other people. My problem with this isn't with my co-workers; everybody here is great. My problem is that based on the configuration of the room, no matter which desk I sit in, my monitors WILL be facing an open window. This causes a glare on my monitors, and it drives me crazy. I prefer a dark IDE theme as I find it easier on the eyes, however this just makes the glare that much worse. How should programmers cope with public office settings? Secondly, how should I cope with my specific problem? Should I give in and adopt a light IDE theme that will reduce the visibility of the glare but increase eye strain, or should I stick to my guns and find another solution?

    Read the article

  • Windows Server firewall asking for advice

    - by George2
    Hello everyone, I have a Windows Server 2003/2008 machine, and I have deployed some applications on it. I want to put this machine in a sandbox environment, which means I want it to be able to access only the proxy/gateway and its privately used SQL Server database server, and I want to prevent network access from this machine to other machines in the lab server room. Any easy solutions? BTW, my current environment is: I have a server which runs some beta software in a lab server room. It connects to the internet through a proxy/gateway. Since the software is beta, I want to reduce the risk of it being hacked from the internet and used to attack my other servers in the same lab. Thanks in advance, George
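
    On the 2008 box, this can be expressed directly with the built-in firewall; a sketch using netsh advfirewall, with placeholder addresses for the proxy and SQL Server (Server 2003's older netsh firewall context has no outbound filtering, so there you would need an IPsec policy or a switch ACL instead):

        rem Default-deny both directions, then whitelist the two peers.
        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

        netsh advfirewall firewall add rule name="Allow proxy" dir=out action=allow ^
            protocol=TCP remoteip=192.168.0.1 remoteport=8080

        netsh advfirewall firewall add rule name="Allow SQL Server" dir=out action=allow ^
            protocol=TCP remoteip=192.168.0.2 remoteport=1433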

    Read the article

  • SSH keys fail for one user

    - by Eli
    I just set up a new Debian server. I disabled root SSH and password auth, so you've gotta use a key file. For my primary user, everything works exactly as expected. I used ssh-keygen -t dsa and got myself a public and private key. Put one in authorized_keys, put the other in a .pem file locally. I wanted to create a user that I can deploy things with, so I did basically the same process: I addusered it, made a .ssh folder, ran ssh-keygen -t dsa (I also tried RSA), and put the keys in their appropriate locations. No luck. I'm getting a Permission denied (publickey) error. When I use the exact same keys as the account that works, same error. When I enable password authentication, I can log in via SSH with the password. How do I debug this?
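
    A likely culprit, since the same keys work for the other account: sshd silently refuses keys when the new user's home directory, .ssh, or authorized_keys permissions are too loose (StrictModes is on by default, and it checks the home directory itself, not just .ssh). A first round of things to try, assuming the user is called deploy:

        # On the server: tighten ownership and permissions for the new user.
        chown -R deploy:deploy /home/deploy/.ssh
        chmod 755 /home/deploy                      # no group/world write on $HOME
        chmod 700 /home/deploy/.ssh
        chmod 600 /home/deploy/.ssh/authorized_keys

        # From the client: see which keys are offered and why they fail.
        ssh -v -i deploy.pem deploy@server

        # On the server: run a second sshd in debug mode on a spare port
        # and connect to it; it prints the exact reason for each rejection.
        /usr/sbin/sshd -d -p 2222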

    Read the article
