Search Results

Search found 22139 results on 886 pages for 'security testing'.

Page 336/886 | < Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >

  • What php programmer should know?

    - by emchinee
    I've dug through the database here and didn't find an answer to my question. What is the standard for a PHP programmer to know? I mean, literally, what group of language functions, mechanisms, and variables should a person know to consider themselves a (good) PHP programmer? (I know 'being good' goes beyond language syntax; still, I'm considering the syntax of plain PHP only.) To give an example of what I mean: functions to control HTTP sessions and cookies, functions to control connections with databases, functions to control file handling, functions to control XML, etc. I omit phrases like 'security' or 'patterns' or 'framework' intentionally, as they apply to every programming language. Hope I made myself clear, any input appreciated :) Note: Michael J.V. is right in claiming that databases are independent of the language, so to put my question more precisely and emphasise the difference: practices of security are ideas to implement (there is no 'Pattern' object with a 'Decorator()' method, is there?), while using databases means knowing mysqli and a set of its methods. A sketch of what that looks like in practice is below.
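    To make that last point concrete, here is a minimal sketch (not from the original question) of the kind of session, cookie, mysqli, file, and XML calls being asked about; the host, credentials, table, and file names are all made up:

        <?php
        session_start();                                  // HTTP session control
        $_SESSION['user'] = 'emchinee';
        setcookie('lang', 'en', time() + 3600);           // cookie control

        // mysqli "and a set of its methods": connect, prepare, bind, fetch.
        $db = new mysqli('localhost', 'user', 'secret', 'shop');
        if ($db->connect_errno) {
            die('Connect failed: ' . $db->connect_error);
        }
        $id = 42;
        $stmt = $db->prepare('SELECT name FROM products WHERE id = ?');
        $stmt->bind_param('i', $id);
        $stmt->execute();
        $stmt->bind_result($name);
        while ($stmt->fetch()) {
            echo $name, PHP_EOL;
        }
        $stmt->close();

        $xml = simplexml_load_file('catalog.xml');           // XML handling
        file_put_contents('log.txt', "done\n", FILE_APPEND); // file handling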

    Read the article

  • Is there a better approach in migrating SIT SVN to UAT SVN?

    - by huahsin68
    In web development, the same piece of source code gets deployed to SIT (system integration testing) SVN/WAS and UAT (user acceptance testing) SVN/WAS. Please note that I am using Jenkins to build everything. I have already ensured the transition from SIT SVN to UAT SVN is in sync by doing a manual diff on the two directories. Usually I will ensure the SIT WAS is working fine and only then deploy to UAT WAS. But now a problem has shown up in UAT WAS while everything works fine in SIT WAS. I suspect a migration fault happened between SIT SVN and UAT SVN. In such a scenario, is there a better approach to handle this problem?
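    One way to automate the manual diff described above, sketched here with made-up repository URLs; svn export produces clean trees without .svn metadata, so the comparison covers only real content:

        # Export clean trees (no .svn metadata) from both repositories.
        svn export http://svn.example.com/sit/trunk /tmp/sit-tree
        svn export http://svn.example.com/uat/trunk /tmp/uat-tree
        # Recursively compare; exit status 0 means the trees match.
        diff -r /tmp/sit-tree /tmp/uat-tree && echo "SIT and UAT are in sync"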

    Read the article

  • Oracle Key Vault Sneak Peek at NYOUG

    - by Troy Kitch
    The New York Oracle Users Group will get a sneak peek of Oracle Key Vault on Tuesday, June 3, from Todd Bottger, Senior Principal Product Manager, Oracle. If you recall, Oracle Key Vault made its first appearance at last year's Oracle OpenWorld in San Francisco in the session "Introducing Oracle Key Vault: Enterprise Database Encryption Key Management." You can catch Todd's talk from 9:30 to 10:30 am. Session Abstract: With many global regulations calling for data encryption, centralized and secure key management has become a need for most organizations. This session introduces Oracle Key Vault for centrally managing encryption keys, wallets, and passwords for databases and other enterprise servers. Oracle Key Vault enables large-scale deployments of Oracle Advanced Security’s Transparent Data Encryption feature and secure sharing of keys between Oracle Real Application Clusters (Oracle RAC), Oracle Active Data Guard, and Oracle GoldenGate deployments. With support for industry standards such as OASIS KMIP and PKCS #11, Oracle Key Vault can centrally manage keys and passwords for other endpoints in your organization and provide greater reliability, availability, and security.

    Read the article

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems." However, I've never seen a catchy label to describe the sort of tests we create using this framework. They are not really "Unit Tests", that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates it from the manual "System Integration Testing" phase of a project, where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions:
    Automated Integration Tests
    Stubbed Integration Tests
    Sandbox Integration Tests
    Localised Integration Tests
    All give a good view of the sorts of tests being done. I think "Stubbed Integration Tests" is the most catchy and descriptive, so I will use that until someone comes up with a better idea.

    Read the article

  • Updated Agenda for OTN Architect Day Los Angeles (Oct 25)

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Los Angeles on October 25, 2012. Registration is open, but seating is limited.
    When: Thursday, October 25, 2012, 8:30am – 5:00pm
    Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048
    Agenda
    8:30 am - 9:00 am | Registration and Continental Breakfast
    9:00 am - 9:15 am | Welcome and Opening Comments | Bob Rhubart | Beverly Ballroom
    9:15 am - 10:00 am | Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann | Beverly Ballroom
    Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and a lower total cost of ownership over a 3- or 5-year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
    10:00 am - 10:30 am | Monitoring and Managing Applications in the Cloud | Basheer Khan | Beverly Ballroom
    Oracle offers a broad portfolio of software and hardware products and services to enable public, private and hybrid clouds to power the enterprise. However, enterprise cloud computing presents new management challenges that need to be addressed to realize its economic benefits. In this session you will learn about the methods and tools you can use to proactively monitor your end-to-end Oracle Applications environment in the cloud, define service-level objectives, gain insight into your end users, and troubleshoot performance problems from a single console.
    10:30 am - 10:45 am | Break
    10:45 am - 11:30 am | Breakout Sessions (pick one)
    Cloud Computing - Making IT Simple | Dr. James Baty | Beverly Ballroom
    The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
    Innovations in Grid Computing with Oracle Coherence | Ashok Aletty | Hollywood Room
    Learn how Oracle Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented and how Oracle is integrating Coherence into its Enterprise Java stack.
    11:30 am - 12:15 pm | Breakout Sessions (pick one)
    Enterprise Strategy for Cloud Security | Dave Chappelle | Beverly Ballroom
    Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives.
    Oracle Enterprise Manager | Perren Walker | Hollywood Room
    This session examines new Oracle Enterprise Manager monitoring, administration, and management features for Oracle Exalogic. It focuses on two management themes: cloud management related to virtualization and applications-to-disk management. For private cloud management, it discusses virtualization management features providing an enhanced set of application deployment capabilities, enabling IaaS as well as PaaS interactions. Then, from an end-to-end perspective, it covers the specific capabilities and, where applicable, best practices for machine, cloud, middleware, and application administration.
    12:15 pm - 1:15 pm | Lunch | Beverly Ballroom Lounge
    1:15 pm - 2:00 pm | Panel Discussion - Q&A with session speakers | Beverly Ballroom
    2:00 pm - 2:45 pm | Breakout Sessions (pick one)
    Oracle Cloud Reference Architecture | Anbu Krishnaswamy | Beverly Ballroom
    Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: what exactly is a Cloud, why you need it, what changes it will bring to the enterprise, and what the key capabilities of a Cloud infrastructure are, using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS).
    21st Century SOA | Jeff Davies | Hollywood Room
    Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission-critical business and system applications. We'll also take a special look at the convergence of SOA & BPM using Oracle's unified technology stack.
    2:45 pm - 3:00 pm | Break
    3:00 pm - 4:00 pm | Roundtable Discussion | Beverly Ballroom
    4:00 pm - 4:15 pm | Closing Comments & Readouts from Roundtables | Beverly Ballroom
    4:15 pm - 5:00 pm | Networking / Reception | Beverly Ballroom Lounge
    Note: Session schedule and content subject to change.

    Read the article

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs:
    Computer: Compaq Presario CQ-104CA
    OS: Windows 7 Home 64 bit
    CPU: AMD V140
    BIOS: latest
    Graphics: AMD M880G with ATI Mobility Radeon HD 4250
    Wireless: Atheros AR9285
    Internal HD: SATA
    I wasn't connected to the internet at the time... I know of a number of people who have installed Ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual boot Linux/Windows. Then the screen went black and the following text appeared (I left out the [OK]'s):
    checking battery state
    starting crash report submission daemon
    starting cpu interrupts balancing daemon
    stopping system V runlevel compatibility
    starting configure network device security
    stopping configure network device security
    stopping cold plug devices
    stopping log initial device creation
    starting enable remaining boot-time encrypting devices
    starting configure network device security
    starting save udev log and update rules
    stopping save udev log and update rules
    stopping enable remaining boot-time encrypted block devices
    checking for running unattended-upgrades
    acpid: exiting
    speech-dispatcher disabled: edit /etc/default/speech-dispatcher
    At this point, the CD is ejected. Then nothing. If I press the return key, it boots Windows. I don't think that's what's supposed to happen. Thinking the CD media or DVD drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen. It just booted Windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from CD and USB drive. Note I downloaded both files twice. I doubt it was a corrupted download. This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers for Ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it, with only the above to show for it.
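    One quick way to rule out a corrupted download or badly written media, sketched here (the image filename varies by architecture; the checksum list is published alongside the images on releases.ubuntu.com):

        # Hash the downloaded image and compare against Ubuntu's published MD5SUMS.
        md5sum ubuntu-12.04-desktop-amd64.iso
        # The boot menu of the CD/USB also offers a built-in
        # "Check disc for defects" entry for verifying the written media.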

    Read the article

  • Two Sun Certification Exams To Retire August 1, 2010

    - by Paul Sorensen
    Effective August 1, 2010, Exam CX-310-400 ("Sun Certified Integrator for Identity Manager 7.1"), currently part of the "Sun Certified Integrator for Identity Manager 7.1" certification track, will be retired. We will also retire Exam CX-310-502 ("Sun Certified Java CAPS Integrator"), currently within the "Sun Certified Java CAPS Integrator" certification track. Both exams will remain available for registration and testing at Prometric Testing Centers through July 31, 2010.
    CREDENTIAL VALIDITY
    Please note that these credentials remain valid indefinitely for those holding the certifications. These retirements therefore have no effect on those who complete the certification requirements before August 1, 2010.
    QUICK LINKS
    Retiring Exams:
    Exam CX-310-400 "Sun Certified Integrator for Identity Manager 7.1"
    Exam CX-310-502 "Sun Certified Java CAPS Integrator"
    Certification Tracks:
    Sun Certified Integrator for Identity Manager 7.1
    Sun Certified Java CAPS Integrator
    Learn more: Oracle Certification Retirements

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-11

    - by Bob Rhubart
    Whiteboards, not red carpets. OTN Architect Day Los Angeles. Oct 25. Free event. Yes, it's Tinseltown, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems—and lunch is on us. Register now. Thursday October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048
    JDeveloper extensions where? | Peter Paul van de Beek
    "Where does the downloaded stuff go after you installed JDeveloper extensions, like SOA Composite Editor, Oracle BPM Studio, or AIA Service Constructor?" Peter Paul van de Beek has the answer.
    Using Apache Derby Database with WebLogic (the express way) | Frank Munz
    Another technical how-to video from Dr. Frank Munz.
    Compensation Hello World | Ronald van Luttikhuizen
    Oracle ACE Director Ronald van Luttikhuizen's post addresses several questions that came up during the "Effective Fault Handling in SOA Suite 11g" session that he and fellow Oracle ACE Guido Schmutz presented at Oracle OpenWorld.
    Oracle Fusion Middleware Security: OAM and OIM 11g Academies
    Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes," the blog explains, "contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."
    Maximum Availability Whitepaper for IDM 11gR2 | Oracle Fusion Middleware Security
    The Oracle Fusion Middleware A-Team shares an overview of and a link to a new white paper: "Identity Management 11.1.2 Enterprise Deployment Blueprint."
    Thought for the Day
    "The trouble with the world is that the stupid are sure and the intelligent are full of doubt." — Bertrand Russell (May 18, 1872 – February 2, 1970)
    Source: SoftwareQuotes.com

    Read the article

  • Unable to run Java file from the command line in Ubuntu

    - by KodeSeeker
    I'm a newbie to Ubuntu and I'm looking to run Java code from the command line. I've checked the path as well. The interesting thing is the code compiles but fails to run, i.e.
    user@ubuntu:~/py-scripts$ javac Main.java
    works well, but when I do
    user@ubuntu:~/py-scripts$ java Main
    I get the following error:
    Exception in thread "main" java.lang.UnsupportedClassVersionError: Main : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:634)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
    Could not find the main class: Main. Program will exit.
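    For context (an addition, not part of the original question): major.minor version 51.0 is the Java 7 class-file format, so Main.class was produced by a newer javac than the JRE trying to run it. A quick check and two possible workarounds:

        # If these report different versions, javac is newer than the runtime.
        javac -version
        java -version
        # Either run with the matching (newer) JRE, or compile for the older one:
        javac -source 1.6 -target 1.6 Main.java
        java Main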

    Read the article

  • Oracle is Sponsoring LinuxCon Europe 2012

    - by Zeynep Koch
    Architecture is amazing in Barcelona, but you will be equally impressed with the Oracle Linux sessions at LinuxCon Europe. Oracle is one of the key sponsors of LinuxCon Europe, and we have great sessions to show you why Oracle Linux is best for your "IT architecture"! We also have a booth where you can pick up the latest Oracle Linux and Oracle VM DVD kit and a Virtualization for Dummies booklet. Don't forget to visit us at technology showcase Booth #19. Oracle sessions at LinuxCon Europe 2012:
    1. OCFS2: Status and Overview - Lenz Grimmer, Oracle
    Wednesday November 7, 2012, 10:40am - 11:25am, Venue: Diamant
    OCFS2, Oracle's general-purpose shared-disk cluster file system for Linux, has come a long way since its development started in 2003. Distributed under the GPL and part of the mainline Linux kernel, it is also included in Oracle Linux and plays a vital role in products like Oracle VM, Oracle RAC or E-Business Suite. This presentation will provide a general technical overview as well as an update on the latest developments. Attendees will learn about the features and improvements that set OCFS2 apart from other Linux-based cluster file systems, including heartbeat implementation (global vs. local heartbeats) and storage optimizations (extent-based allocations, hole punching, reflinks).
    2. Status of Linux Tracing - Elena Zannoni, Oracle
    Wednesday November 7, 2012, 11:35am - 12:20pm, Venue: Diamant
    There have been many developments recently in the Linux tracing area. The tracing infrastructure in the kernel is getting more robust, with the recent introduction of uprobes to allow the implementation of user space tracing, and new features of perf. There are many tracing tools to choose from, including the newest kid on the block, DTrace for Linux. This talk will take the audience through the main tracing facilities available today, whether tightly integrated with the kernel code or maintained standalone.
    3. MySQL Security Model and Pluggable Authentication - Kristofer Pettersson, Oracle
    Wednesday November 7, 2012, 1:50pm - 2:35pm, Venue: Diamant
    With increasing security awareness among web and cloud developers, knowing how to secure your database from unauthorized or malicious access has become important. This talk explains the MySQL security model, pluggable authentication, and new auditing features, and rounds off with some pointers on how to securely integrate your database into your Linux web stack.
    We look forward to seeing you in Barcelona, Spain on November 5-9, 2012. Register today!

    Read the article

  • Deploying a very simple application

    - by vanna
    I have a very simple working console application written in C++, linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually deploying the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google tests. To me, the distribution of this application goes like this:
    to developers: the src directory with the CMakeLists.txt file (multi-platform distribution), with a README.txt and an INSTALL.txt
    to users: the executable and a README.txt
    git repo: everything mentioned above, plus the sources for testing and the gtest external lib
    At this point: considering the complexity of my application, am I doing it right? Is there any reference that would formalize this deployment process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, or external libraries like Boost: how should I package this to deploy it in a professional way?
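    For reference, a minimal CMakeLists.txt of the shape described, with made-up project and target names (the original file is not shown in the question):

        # Minimal sketch; names and paths are illustrative.
        cmake_minimum_required(VERSION 2.8)
        project(MyTool CXX)

        # The light static library the console application links against.
        add_library(mylib STATIC src/mylib.cpp)

        # The console application itself.
        add_executable(mytool src/main.cpp)
        target_link_libraries(mytool mylib)

        # 'make install' (or the INSTALL target in VS) stages what users get.
        install(TARGETS mytool DESTINATION bin)
        install(FILES README.txt DESTINATION share/doc/mytool)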

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car’s computer switched between gears intermittently. Imagine building up speed, then when you reach 80km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain “These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program which probably accounts for how you managed to drive here in the first place...”  I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said “Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources”. Then, as soon as he forced the speed sensor to come back online the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily and unfortunately often the most overlooked goal of writing unit tests by those new to the art and those who oppose it alike - The goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method in practice will soon prove to be an exhausting and ultimately fruitless exercise given the uncertain and ever-changing nature of business requirements. Consequently it is quite possible, whilst conducting proper unit tests, to call any single method several times as you examine and contemplate different scenarios. Let’s write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly you can use whatever you’re comfortable with. First we’ll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox.  Then we’ll create a Gearbox class which we’ll pass to a constructor when we instantiate an object of type Computer. 
    All three are described below.

    ISpeedSensor.cs

        namespace AutomaticVehicle
        {
            public interface ISpeedSensor
            {
                int ReportCurrentSpeed();
            }
        }

    Gearbox.cs

        namespace AutomaticVehicle
        {
            public class Gearbox
            {
                private ISpeedSensor _speedSensor;

                public Gearbox( ISpeedSensor gearboxSpeedSensor )
                {
                    _speedSensor = gearboxSpeedSensor;
                }

                /// <summary>
                /// This method obtains its reading from the speed sensor.
                /// </summary>
                public int ReportCurrentSpeed()
                {
                    return _speedSensor.ReportCurrentSpeed();
                }
            }
        }

    Computer.cs

        namespace AutomaticVehicle
        {
            public class Computer
            {
                private Gearbox _gearbox;

                public Computer( Gearbox gearbox )
                {
                    // Store the dependency; without this assignment the class
                    // would throw a NullReferenceException later on.
                    _gearbox = gearbox;
                }

                public int GetCurrentSpeed()
                {
                    return _gearbox.ReportCurrentSpeed();
                }
            }
        }

    Since this post is about unit testing, that is exactly what we’ll create next. Create a second project in your solution. I called mine AutomaticVehicleTests and I immediately referenced the respective nunit, moq and AutomaticVehicle dll’s. We’re going to write a test to examine what happens inside the Computer class.

    ComputerTests.cs

        namespace AutomaticVehicleTests
        {
            [TestFixture]
            public class ComputerTests
            {
                [Test]
                public void Computer_Gearbox_SpeedSensor_DoesThrow()
                {
                    // Mock ISpeedSensor in gearbox
                    Mock<ISpeedSensor> speedSensor = new Mock<ISpeedSensor>();
                    speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();
                    Gearbox gearbox = new Gearbox( speedSensor.Object );

                    // Create a Computer instance to test its behaviour towards an exception in gearbox
                    Computer carComputer = new Computer( gearbox );
                    // For simplicity let’s assume for now the car only travels at 60 km/h.
                    Assert.AreEqual( 60, carComputer.GetCurrentSpeed() );
                }
            }
        }

    What is happening in this test? We have created a mocked object using the ISpeedSensor interface which we've passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I’ll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time. Next I’ve gone ahead and created a scenario where I’ve declared the speed sensor in Gearbox to be faulty by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether it’s caused by mechanical failure, some logical error on your part, or a fellow developer who didn’t consult the documentation (or the lack thereof) - whether you’re calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It’s a scenario I’ve created, and this test is about how the code within the Computer instance will behave towards any such error as I’ve depicted. Now, if you’ve followed my final assert method closely you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I’m testing to see if the value returned is equal to what I expect it to be under perfect conditions – I’m not testing to see if an error has been thrown! 
    Why is that? Well, in short this is TDD. Test Driven Development is about first writing your test to define the result we want, then going back and changing the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop. Remember?) So let’s go ahead and run our test as is. It fails miserably... Good! Let’s go back to our Computer class and make a small change to the GetCurrentSpeed method.

    Computer.cs

        public int GetCurrentSpeed()
        {
            try
            {
                return _gearbox.ReportCurrentSpeed();
            }
            catch
            {
                // RunRestrictiveProgram is assumed here to return a safe
                // fallback speed (60 km/h in this example) so that the
                // method compiles and the test above passes.
                return RunRestrictiveProgram();
            }
        }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You’re more than welcome to provide a fuller implementation for RunRestrictiveProgram should you feel the need to. It's not within the scope of this post or related to the point I'm trying to make. What is important is to notice how the focus has shifted in our approach from how things can break - to how things behave when broken.

    Happy coding!

    Read the article

  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications has used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular, to restrict both access to certain pages and functionality present within a given page.   Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft, but rather it was simply not designed for implementing both role-based security and any inherent ACL you may define within these roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.   The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the aspnet membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table.   Since I have a separate object to store extended user information, I simply created an ACL object and expose it as a property of my user object.   This is all well and good, but it does not help in regards to the SiteMapProvider and restricting access at the page level based on the user's ACL.   The straightforward answer would be to develop some code within the databound event for the menu that checks the page title and has hardcoded logic dictating that a user must have certain permissions turned on. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page you'd need to do a build and go through your normal deployment process... ugh!!!   An alternative method, albeit not perfect, is to utilize the resourceKey property on the SiteMapNodes in the SiteMap file with the name of the permission required to view the page. Within the databound event for your menu you iterate the SiteMapNodes in the menu's SiteMapProvider looking for a match at the page level based on title. When a match is detected, you have a switch/case on the SiteMapNode's resourceKey (the name of the ACL permission required). The case for the resourceKey ensures the user's ACL permission is turned on and voila!!!   This is notably not perfect in that it is using the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.   Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event. This application uses the Telerik Menu control:
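    A minimal sketch of the two listings, with made-up page names and ACL flags; CurrentUser and its properties stand in for the application's own User object described above, and the handler assumes the usual Telerik RadMenu event signature:

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
          <siteMapNode url="~/Default.aspx" title="Home">
            <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
            <siteMapNode url="~/Admin.aspx" title="Administration" resourceKey="CanAdminister" />
          </siteMapNode>
        </siteMap>

        protected void RadMenu1_ItemDataBound(object sender, Telerik.Web.UI.RadMenuEventArgs e)
        {
            // The bound SiteMapNode carries the resourceKey declared in the SiteMap file.
            SiteMapNode node = e.Item.DataItem as SiteMapNode;
            if (node == null || string.IsNullOrEmpty(node.ResourceKey))
                return; // no ACL requirement declared for this page

            bool allowed;
            switch (node.ResourceKey)
            {
                case "CanViewReports":
                    allowed = CurrentUser.ACL.CanViewReports;  // hypothetical ACL flags
                    break;
                case "CanAdminister":
                    allowed = CurrentUser.ACL.CanAdminister;
                    break;
                default:
                    allowed = false;  // unknown keys fail closed
                    break;
            }

            // Hide menu entries the user's ACL does not permit.
            e.Item.Visible = allowed;
        }

    Matching via the bound SiteMapNode directly, rather than by page title, simplifies the lookup the post describes while keeping the same switch/case on the resourceKey.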

    Read the article

  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of the post is not about charitable giving, although that is also a great topic. Application Lifecycle Management, and the systems that support the environment, is an area where I expect many teams to invest heavily in 2011. For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas, even for relatively small and medium sized projects. In recent years, many teams have adopted methodologies which address individual aspects of this; but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... until now! Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier. These possibilities (and practicalities) are nothing short of amazing. Architecture thru Testing integration? YES. Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES. Identification of which tests will be potentially impacted by a given code change? YES. Resilient Automated Testing of User Interfaces? YES. Automatic Deployment Management? YES. Integration Level testing as part of (designated) Builds? YES. I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain. Over the course of the year, I am hoping to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future. As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM? ps: For those interested in becoming an "alms giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the way to do it?

    Read the article

  • SO-Aware @ TechReady (Microsoft Event)

    - by SURESH GIRIRAJAN
    A session on SO-Aware was presented at the Microsoft TechReady event this week; check here for more details: http://tellagostudios.com/blog/so-aware-highlighted-microsoft-techready
    Check here for more details on SO-Aware and how to leverage it within your enterprise if you're using BizTalk Server, WCF services, or services built on Azure. It provides a lot of capabilities, such as:
    o Centralized service repository
    o Centralized configuration management
    o Service testing
    o Monitoring
    o Transparent integration with technologies such as Visual Studio, BizTalk Server, Windows Server & Azure AppFabric, among many others
    o SO-Aware Test Workbench, which provides developers with a visually rich environment to model and control the execution of load and functional tests in a SOA infrastructure. This tool includes the first native WCF load testing engine, allowing developers to transparently load test applications built on Microsoft's service-oriented technologies such as WCF, BizTalk Server, or the Windows Server or Azure AppFabric.

    Read the article

  • CAMeditor v1.9 – thoughts and reflections

    - by david.webber(at)oracle.com
    We recently published the latest iteration of the CAMeditor tool on SourceForge.net, including more enhancements to the NIEM capabilities. This release represents an incremental improvement over the prior version, with mostly bug fixes and patches. We're now working on the full v2.0 release, which will feature substantial improvements and new features in practically all areas. Most importantly, we are improving the dictionary handling and providing the ability to visually design new exchange schema directly from dictionary sets of components. In addition, we are doing some interim release work on 1.9.x, with patches and enhancements, particularly to support running on Ubuntu and other non-Windows platforms. We are also providing an Ant-script-based deployment for the CAMV validation engine, so you can do unit testing of batches of templates and XML instance samples using command-line scripts. More updates will be forthcoming as we make early release versions available for testing purposes.
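    To illustrate the batch idea only, here is a skeleton Ant target; the jar name and arguments are placeholders, not the actual CAMV command-line interface, which ships with the distribution:

        <!-- Hypothetical skeleton: camv.jar and its arguments are placeholders. -->
        <project name="camv-batch" default="validate">
          <target name="validate">
            <java jar="lib/camv.jar" fork="true" failonerror="true">
              <arg value="-template"/>
              <arg value="templates/exchange.cam"/>
              <arg value="-xml"/>
              <arg value="samples/instance1.xml"/>
            </java>
          </target>
        </project>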

    Read the article

  • Digital Due Process

    Coalition urges updates to the Electronic Communications Privacy Act (ECPA) to reflect the web 2.0 world.

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT
    A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^) Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone who has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, so 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^) Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer... well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • How to convince my boss to improve code quality?

    - by Vimvq1987
    The place I'm working for is a service provider. We have a lot of services, which were written to deal with deadlines, so their code is really terrible:
    No coding convention; everyone codes in his own style
    No unit testing (which is really bad)
    No refactoring (which is truly worse)
    No automated build/deployment
    etc., and this code is used again and again, so bad code continues to spread all over my department. I really want to set up a quality standard for our code by requiring everyone to follow rules: every line of code which does not follow convention will be rejected, and every function which does not pass unit testing will not be committed... But I don't know how to convince my boss to allow me to do this. I'm a relative newcomer, so inspiring people through my work is really hard, and I think it's easier if my boss supports me in this. Thank you very much for your advice.

    Read the article

  • "Malformed line 6" error in my /etc/apt/sources.list

    - by Odi1215
    I'm new to Ubuntu so I don't really know much yet. I encountered this problem while on the terminal:
    E: Malformed line 6 in source list /etc/apt/sources.list (dist parse)
    E: The list of sources could not be read.
    What should I do? Help would be much appreciated. Here's my sources.list:
    # /etc/apt/sources.list
    deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu/ precise-security main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
    deb http://archive.canonical.com/ partner
    deb-src http://archive.canonical.com/ partner
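    Counting the comment as line 1, line 6 is the deb-src partner entry. One plausible cause, offered as a guess rather than a confirmed diagnosis: the Canonical partner lines are missing their suite name. The usual form for 12.04 (precise) is:

        deb http://archive.canonical.com/ubuntu precise partner
        deb-src http://archive.canonical.com/ubuntu precise partner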

    Read the article

  • /etc/postfix/transport missing; what should it look like?

    - by Thufir
    I'm following the mailman guide but couldn't locate /etc/postfix/transport, so I created it as follows:
    root@dur:~# cat /etc/postfix/transport
    dur.bounceme.net mailman:
    root@dur:~#
    root@dur:~# telnet localhost 25
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    220 dur.bounceme.net ESMTP Postfix (Ubuntu)
    ehlo fqdn_test
    250-dur.bounceme.net
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN
    mail from:[email protected]
    250 2.1.0 Ok
    rcpt to:thufir@localhost
    451 4.3.0 <thufir@localhost>: Temporary lookup failure
    rcpt to:[email protected]
    451 4.3.0 <[email protected]>: Temporary lookup failure
    quit
    221 2.0.0 Bye
    Connection closed by foreign host.
    root@dur:~#
    root@dur:~# postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
    append_dot_mydomain = no
    biff = no
    broken_sasl_auth_clients = yes
    config_directory = /etc/postfix
    default_transport = smtp
    home_mailbox = Maildir/
    inet_interfaces = loopback-only
    mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
    mailbox_size_limit = 0
    mailman_destination_recipient_limit = 1
    mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
    myhostname = dur.bounceme.net
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    readme_directory = no
    recipient_delimiter = +
    relay_domains = lists.dur.bounceme.net
    relay_transport = relay
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_local_domain = $myhostname
    smtpd_sasl_path = private/dovecot-auth
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_type = dovecot
    smtpd_tls_auth_only = yes
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
    smtpd_tls_mandatory_ciphers = medium
    smtpd_tls_mandatory_protocols = SSLv3, TLSv1
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
    root@dur:~#
    root@dur:~# tail /var/log/mail.log
    Aug 28 02:05:15 dur postfix/smtpd[20326]: connect from localhost[127.0.0.1]
    Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "thufir@localhost"
    Aug 28 02:06:10 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<fqdn_test>
    Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "[email protected]"
    Aug 28 02:06:23 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<fqdn_test>
    Aug 28 02:06:28 dur postfix/smtpd[20326]: disconnect from localhost[127.0.0.1]
    Aug 28 02:06:49 dur dovecot: pop3-login: Login: user=<thufir>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=20338, TLS
    Aug 28 02:06:49 dur dovecot: pop3(thufir): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
    root@dur:~#
    The manual page is here.
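    Judging from the "aliases.db: No such file or directory" warnings above, one plausible next step, using standard Postfix and Mailman tools (paths assume the Ubuntu packages):

        # Generate Mailman's alias file and its .db, which the log says is missing:
        /usr/lib/mailman/bin/genaliases
        # Compile the hand-written transport map into transport.db:
        postmap /etc/postfix/transport
        postfix reload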

    Read the article
