Search Results

Search found 6887 results on 276 pages for 'internal'.


  • Lubuntu 12.04 on Acer laptop boots to blank blue screen

    - by WGCman
    My previous question on this was closed, but I am posting it again because the solution my son eventually found may assist other users of the forum, or someone may be able to tweak it to improve performance. Having installed Kubuntu 12.04.01 from a live USB onto my desktop, I wanted to do the same on my laptop, an Acer Aspire 1362, which has 256MB of RAM (actually 512MB "on the box", but a good deal of that is borrowed by the graphics!). I found Kubuntu wouldn't run on so little memory, so I downloaded Lubuntu-12.04-alternate-i386.iso, which I understood was light enough to work.

    The laptop has one internal 40GB Toshiba hard drive divided into 3 partitions: C, 19GB with Windows XP, Windows program files and some data; D, 19GB, mostly data; and a small 2GB partition with some Acer software, which XP can't normally “see”. I transferred most of the contents of D to a memory stick, leaving 16GB free for Lubuntu. I did not want to dump XP yet, though it is painfully slow. I installed Lubuntu from the USB stick, accepting the default answers to most of the questions. The D: partition was further partitioned into a 500MB boot partition, 10GB for Linux, 2GB swap and 6GB for data shareable between Linux and Windows.

    I had no error messages during installation. I rebooted, was offered the choice of Ubuntu or XP, and selected the former. After a few minutes, I get a dark blue screen announcing Lubuntu with five dots underneath which lighten in turn. Eventually the lights stop, and whatever I try, the screen remains blank apart from “Lubuntu”. I tried several solutions suggested on the forum for “identical” questions, but without success.


  • SSAS Compare version 1.0 released

    - by Red Gate Software BI Tools Team
    We’re pleased to announce that SSAS Compare version 1.0 has been released as a free tool. Version 1.0 includes:

    - Comparisons of live databases and XMLA or Analysis Services Project files
    - MDX syntax diffs and highlighting
    - Server comparisons
    - Deployment wizard with summaries of scripted actions
    - Bug fixes and engine and UI refinements

    We’ve tested it on as many cube configurations as we could find (not just good old AdventureWorks!), but we can’t provide support for free tools — so if you’re reliant on SSAS Compare for your cube deployment, use it at your own risk. See the user license agreement in the installer for more details.

    SSAS Compare’s come a long way from its humble beginnings as an internal tool first developed for Red Gate’s own BI developers. Today’s SSAS Compare is much more stable — not to mention much easier to use — and something the team is proud to have released with Red Gate’s name on.

    Next: Deployment Manager. We’re working on integrating SSAS Compare cube deployment with our new Deployment Manager tool, so you’ll be able to create cube deployment scripts and automate the deployment process, too. We’re documenting the process in a white paper we’ll publish online in the next week.

    Thank you! Thanks to all the SSAS Compare users out there. Without your feedback, we could never have produced such a stable product so quickly. We hope you continue to find it useful. See you in Deployment Manager!


  • Oracle R Distribution 2.13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2.13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux:

    - The Intel Math Kernel Library (MKL) on Intel chips
    - The AMD Core Math Library (ACML) on AMD chips

    To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R. Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than R's standard BLAS library.

    Open-source R is linked to NetLib's BLAS libraries, which are not multi-threaded and only use one core. While R's internal BLAS libraries are efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML.

    For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.
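
    On Linux this amounts to a one-line environment change before starting R - a minimal sketch, where the MKL install path shown is an assumption that varies by installation:

        # Point the dynamic linker at the MKL (or ACML) shared libraries;
        # /opt/intel/mkl/lib/intel64 is a typical but installation-specific path.
        export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
        R    # R now picks up the optimized, multi-threaded BLAS at startup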


  • Making Modular, Reusable and Loosely Coupled MVC Components

    - by Dusan
    I am building an MVC3 application and need some general guidelines on how to manage complex client-side interaction between my components. Here is my definition of a component, in a general way: a component has its own controller, model and view. All of the component's logic is placed inside these three parts, and the component is sort of "standalone": it contains its own form, the data needed for interaction, updates itself with Ajax, and so on.

    Besides this internal logic and behavior, the component needs to be able to "talk" to the outside world. By this I mean it should provide data and events (sort of), so that when the component is embedded in a page it can notify other components, which can then update themselves based on the current state and data.

    I have an idea to use a client-side ViewModel (in JavaScript) which would hook up all relevant components on the page and control the interaction between them, as in the sketch below. This would make components reusable and modular - independent of the context in which they are used. How would you do this? I am a bit stuck, as I do not know if this is a good approach and whether it is technically possible using JavaScript/jQuery.

    The confusing part is the update via Ajax: how to ensure that a component is properly linked to the ViewModel when the component is Ajax-updated (or, even worse, removed or dynamically added)? Also, how should this ViewModel be constructed, and which techniques should be used here and in the components so they work in synergy? On the web I have found various examples of similar approaches, but they are either oversimplified (even for dummies) or overly specific, and do not provide a valuable resource or general solution for this kind of implementation. If you have some serious examples, that would also be very helpful.

    Note: My aim is to make interactions between many components on the same page simpler, more robust and more elegant.
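
    A minimal sketch of such a ViewModel, assuming jQuery; all selectors and URLs are hypothetical. The trick for Ajax-updated (or dynamically added/removed) components is to register handlers once at the document level and resolve the actual elements at event time:

        // A page-level mediator (hypothetical): components publish and subscribe
        // here instead of referencing each other directly.
        var pageViewModel = {
            publish: function (eventName, data) {
                $(document).trigger(eventName, data);
            },
            subscribe: function (eventName, handler) {
                $(document).on(eventName, handler);
            }
        };

        // Subscribers re-query the DOM at event time, so they keep working even
        // if the component's markup was replaced by an Ajax update:
        pageViewModel.subscribe("customerChanged", function (e, customerId) {
            $(".order-list").load("/Orders/List", { customerId: customerId });
        });

        // Publishers use delegated events, which survive Ajax replacement and
        // cover dynamically added components too:
        $(document).on("change", ".customer-picker select", function () {
            pageViewModel.publish("customerChanged", $(this).val());
        });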


  • Data Center Modernization: Harness the power of Oracle Exalogic and Exadata with PeopleSoft

    - by Michelle Kimihira
    Author: Latha Krishnaswamy, Senior Manager, Exalogic Product Management

    Allegis Group, a Hanover, MD-based global staffing company, is the largest privately held staffing company in the United States, with more than 10,000 internal employees and 90,000 contract employees. Allegis Group is a $6+ billion company offering a full range of specialized staffing and recruiting solutions to clients in a wide range of industries. The company processes about 133,000 paychecks per week, every week of the year. With 300 offices around the world and the hefty task of managing HR and payroll, the PeopleSoft system at Allegis is a mission-critical application.

    The firm is in the midst of a data center modernization initiative. Part of that project meant moving the company's PeopleSoft applications (Financials and HR modules as well as a custom Time & Expense module) to a converged infrastructure. The company ran a proof of concept with four different converged architectures before deciding upon Exadata and Exalogic as the platform of choice. Performance combined with high availability for running mission-critical payroll processes drove this decision.

    During testing on Exadata and Exalogic, Allegis applied a particular tax update (11-F) in the production environment. A job that ran for roughly six hours completed in less than 1.5 hours. With additional tuning, the second run of the 11-F tax update dropped to 33 minutes - a 90% improvement! Not only that, the move will help the company save money on middleware by consolidating its use of Oracle licensing on a single platform.

    Summary: With a modern data center powered by Exalogic and Exadata to run mission-critical PeopleSoft HR and Financial applications, Allegis is positioned to manage business growth and improve employee productivity. PeopleSoft applications run on an engineered systems platform, minimizing hardware and software integration risks.

    Additional Information - Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.


  • Can't install libglib2.0-dev

    - by Joachim Pileborg
    Since Handbrake can't be installed in Oneiric, I decided to try to build it from source instead. The build is interrupted because it complains that glib is not installed, so I thought I had better install the glib development package. But I can't:

        $ sudo aptitude install -V libglib2.0-dev
        The following NEW packages will be installed:
          libglib2.0-dev{b} [2.30.0-0ubuntu4]
        0 packages upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 1 775 kB of archives. After unpacking 8 831 kB will be used.
        The following packages have unmet dependencies:
          libglib2.0-dev: Depends: libglib2.0-0 (= 2.30.0-0ubuntu4) but 2.31.0-0ubuntu1~oneiric1 is installed.
                          Depends: libglib2.0-bin (= 2.30.0-0ubuntu4) but 2.31.0-0ubuntu1~oneiric1 is installed.
        Internal error: the solver Install(libglib2.0-bin 2.30.0-0ubuntu4 <libglib2.0-dev 2.30.0-0ubuntu4 -> {libglib2.0-bin 2.30.0-0ubuntu4 libglib2.0-bin 2.30.0-0ubuntu4}>) of a supposedly unresolved dependency is already installed in step 237

    Aptitude then suggests a solution that involves removing basically all libraries, including libc. How do I install the glib development package?


  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it, and I use it in conjunction with SRP to create classes that do only one thing. I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc.

    Consider the following simple, extendable class:

        class PaycheckCalculator {
            // ...
            protected virtual decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the overtime factor changes to 1.5. Since the above class was designed to be extended, I can easily subclass it and return a different overtime factor. But despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing and overriding the method and then re-wiring my objects in my IoC container. As a result, I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy, because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently?

    Update: based on the answers, it looks like this contrived example is a poor one, for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended, by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
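
    For concreteness, the extension route described above looks like this (note GetOvertimeFactor must be declared virtual in C# for the override to compile, as shown in the class above; the subclass name is hypothetical):

        // Extension without modification: the base class stays untouched, and
        // the IoC container is re-wired to hand out the subclass instead.
        class RevisedPaycheckCalculator : PaycheckCalculator {
            protected override decimal GetOvertimeFactor() { return 1.5M; }
        }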


  • What to expect when creating a style guide?

    - by ted.strauss
    My organization would like to create a full-fledged style guide that will be applicable to internal & external web sites, print advertising, trade show design, and overall branding. This article lays out the scope we're aiming for, and has links to many great example style guide PDFs. The goal is to create a style guide comparable to one of these. I'd like to set realistic expectations within my organization for creating this document, so I have a few questions pertaining to this:

    - We don't have design staff. Should we be looking for a design firm or freelancer to come in for a 2-6 month contract, or do we need a longer commitment?
    - If we do go with a firm or freelancer, would the pay scale be comparable to typical design work, or is a style guide a higher order of work?
    - How long should it take a pro to create a style guide? To make estimates more concrete, let's say web only, including all custom graphics.
    - Any red flags to watch out for? (Compare: a new coder who fails to use CSS properly would be a red flag.)


  • Keep getting smbd errors, but apport asks questions I can't answer

    - by Steve Kroon
    What I want to know below is where to file a bug report about the poor series of questions I'm being asked...

    Every time I reboot (at least), I get a crash dialog ("Sorry, Ubuntu 12.04 has experienced an internal error"). Clicking on "show details" shows the problem is with smbd, but the rest of the trace does not appear. When I choose to continue and send a bug report, I am told I will be asked a series of questions, in a window titled "Apport". The first question asks:

        How would you best describe your setup?
        - I am running a Windows File Server.
        - I am connecting to a Windows File Server.

    Since I am doing both, I have no idea which to choose. In any case, selecting one leads to the next question:

        Did this used to work properly with a previous release?
        - No
        - Yes

    But I never tried to use Samba in a previous release, so I can't seriously answer this. After picking an option here, I get more questions in a similar vein. Surely there should be "I don't know/I haven't tried" options, even if they simply mean you can't submit a useful bug report.


  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990's to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh.

    The existing system:

    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - 3rd-party libraries referenced by Access directly interface with a control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility

    The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor, who is still in charge of the technical direction and is prescribing the desired deliverables called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now".

    Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?


  • A Quarter Century of SPARC

    - by kemer
    You might have missed an interesting milestone: the 25th anniversary of SPARC. Twenty-five years! Almost 40% of my life: humbling, maybe a little scary. When I joined Sun Microsystems in 1988, SPARC was just starting to shake things up. The next year we introduced the SPARCstation 1, which had basically triple the performance of our Motorola-based Sun-3 systems. Not too long after that, our competition began a campaign of “SPARC is dead.” We really distressed them with our success, in spite of our small size. “It won’t last.” “It can’t last!” So they told themselves. For a stroll down memory lane, take a look at this page.

    I remember the sales meeting we had in Atlanta to internally announce the SPARCstation 1. Sun hadn’t really hit the big time yet. Our much bigger competitors viewed us as an ill-mannered pest, certain of our demise. And why wouldn’t they be certain: other startups more our size, such as Apollo (remember them?), Silicon Graphics (they fought the good fight!), and the incredibly cool Symbolics are memories. Wait! There was also a BIG company, DEC, who scoffed at us: they are history, too. In fact, we really upset them with what was supposed to be an internal-only video production that was a take-off on Bruce Lee movies, in which we battled the evil Doctor DEC - complete with computer mice (or is that “mouses”?) wielded like nunchucks, with the new SPARCstation 1 somehow in the middle of everything. The memory is vivid, but the details hazy. After all, that was almost a quarter century ago.

    So, here’s to Oracle’s SPARC: still going strong after all these years. – Kemer


  • SEO for maps-based websites that require user interaction

    - by j0nes
    I have a website that basically shows a lot of locations worldwide on a Google Maps-like interface. The map itself is built using the Leaflet library and OpenStreetMap tiles. On the map, I show markers at each location I have. There is a popup window when I click on a marker that shows additional information and contains links to "detail" pages for that location. I fetch the location data for the current viewport via an AJAX call to my server, so the additional information is not available in the HTML page itself.

    The detail pages are the pages my users are interested in. My normal users load the map, search for the location they are interested in, click on a marker and click on a link in the popup window. For search engines, however, this might look different. As this navigation pattern relies on user interaction, I think they might not be able to find the detail pages.

    My questions:

    - Are search engines able to follow a navigation path like the one outlined above?
    - How can I improve the navigation for search engines? (For example, showing textual links below the map, sitemaps...)
    - How important are internal links for SEO?


  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted of mostly server-side developers with minimal expertise in JavaScript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago two high-level client-side developers joined our team. They can do in HTML/CSS/JavaScript pretty much anything that we could previously do with server-side code and server-side web controls:

    - Show/hide controls
    - Do validation
    - Control AJAX refreshing

    So I started to think that maybe it would be more efficient to just create a high-level API around our business logic, kinda like the Amazon Fulfillment API (http://docs.amazonwebservices.com/fws/latest/APIReference/), so that client-side developers would fully take over the UI, while server-side developers would only concentrate on business logic. For an ordering system you would have a high-level API like:

        OrderService.asmx
            CreateOrderResponse CreateOrder(CreateOrderRequest)
            AddOrderItem
            AddPayment
            SubmitPayment
            GetOrderByID
            FindOrdersByCriteria
            ...

    There would be JSON/REST access to the API, so it would be easy to consume from the client-side UI. We could use this API both for internal UI development and for third parties to create their own applications. With the advances in JavaScript and the availability of good client-side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high-level APIs (a la Amazon) that client-side developers can consume?
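
    As a rough sketch of what one such endpoint could look like in the ASMX style mentioned above (assuming .NET's ScriptService JSON support; the request/response types and the OrderWorkflow helper are hypothetical):

        // A thin service layer: expose business operations as JSON and make no
        // UI decisions at all.
        [System.Web.Services.WebService(Namespace = "http://example.com/orders")]
        [System.Web.Script.Services.ScriptService] // enables JSON calls from client script
        public class OrderService : System.Web.Services.WebService
        {
            [System.Web.Services.WebMethod]
            public CreateOrderResponse CreateOrder(CreateOrderRequest request)
            {
                return OrderWorkflow.Create(request); // delegate straight to business logic
            }
        }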


  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe.

    However, the reason that persistent data structures are thread-safe is that if one thread were to "add" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms.

    Why then is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
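
    A minimal hand-rolled Java sketch of the sharing behavior described above (not from any particular library):

        // An immutable cons list: "adding" returns a new list whose tail is the
        // old list, so both versions share all of the old structure.
        final class PList<T> {
            final T head;
            final PList<T> tail;
            PList(T head, PList<T> tail) { this.head = head; this.tail = tail; }
            PList<T> push(T value) { return new PList<>(value, this); }
        }

        // Thread A: PList<Integer> mine = shared.push(42);
        // Thread B: still sees 'shared' exactly as it was -- no torn reads, but
        // also no visibility of A's change, which is precisely the question raised.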


  • How can I remove duplicate icons for "launched" java programs in the launcher?

    - by Tim
    When launching Java programs (like IntelliJ IDEA and CrashPlan) in Natty's Unity launcher, duplicate icons are shown (see image). For IntelliJ I created the .desktop file; for CrashPlan the .desktop file is supplied with the application. Is there something that can be changed in the .desktop files (or somewhere else) that can prevent this from occurring? I couldn't find a bug report for Unity itself, but programs like Gnome-Do/Docky have bug reports and had to make internal changes to their applications to prevent this. In this image the first icon is the one created from the .desktop file and the second icon is after launching it. The second icon disappears when closing the application.

    Custom IntelliJ .desktop file:

        #!/usr/bin/env xdg-open
        [Desktop Entry]
        Version=1.0
        Type=Application
        Terminal=false
        Icon[en_US]=/opt/idea/bin/idea128.png
        Name[en_US]=IntelliJ IDEA
        Exec=/opt/idea/bin/idea.sh
        Name=IntelliJ IDEA
        Icon=/opt/idea/bin/idea128.png
        StartupNotify=true

    CrashPlan-provided .desktop file:

        [Desktop Entry]
        Version=1.0
        Encoding=UTF-8
        Name=CrashPlan
        Categories=;
        Comment=CrashPlan Desktop UI
        Comment[en_CA]=CrashPlan Desktop UI
        Exec=/usr/local/crashplan/bin/CrashPlanDesktop
        Icon=/usr/local/crashplan/skin/icon_app_64x64.png
        Hidden=false
        Terminal=false
        Type=Application
        GenericName[en_CA]=
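
    One avenue worth trying (my suggestion, not from the original post): launchers typically match running windows to .desktop files via the window's WM_CLASS, and Java apps often report a class that doesn't match the desktop entry. Adding a StartupWMClass line may let Unity merge the two icons; the class value below is illustrative and should be read from the running window:

        # With the application running, run this and click its window:
        xprop WM_CLASS

        # Then add the reported class to the .desktop file, for example:
        StartupWMClass=jetbrains-idea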


  • Failed Project: When to call it?

    - by Dan Ray
    A few months ago my company found itself with its hands around a white-hot emergency of a project, and my entire team of six pulled basically a five-week "crunch week". In the 48 hours before go-live, I worked 41 of them, two back-to-back all-nighters. Deep in the middle of that, I posted what has been my most successful question to date. During all that time there was never any talk of "failure". It was always "get it done, regardless of the pain."

    Now that the thing is over and we as an organization have had some time to sit back and take stock of what we learned, one question has occurred to me. I can't say I've ever taken part in a project that I'd say had "failed". Plenty that were late or over budget, some disastrously so, but I've always ended up delivering SOMETHING. Yet I hear about "failed IT projects" all the time. I'm wondering about people's experience with that. What were the parameters that defined "failure"? What was the context?

    In our case, we are a software shop with external clients. Does a project that's internal to a large corporation have more space to "fail"? When do you make that call? What happens when you do? I'm not at all convinced that doing what we did was a smart business move. It wasn't my call (I'm just a code monkey), but I'm wondering if it might have been better to cut our losses, say we're not delivering, and move on. I don't say that just due to the sting of the long hours - the company royally lost its shirt on the project, plus the intangible costs to the company in terms of employee morale and loyalty were large. Factor that against the PR hit of failing to deliver a high-profile project like this one was... and I don't know what the right answer is.


  • JRuby and JVM Languages at JavaOne!

    - by Yolande Poirier
    "My goal with my talks at JavaOne is to teach what is happening at the JVM level and below so people understand better where we are going" explains Charles Nutter, Jruby project lead. In this interview, Charles shared the JRuby features he presented at the JVM Language Summit. They include foreign function interface (FFI), IO layer, character transcoding, regular expressions, compilers, coroutines, and more.  At JavaOne, he will be presenting:  Going Native: Bringing FFI to the JVM The Java Native Runtime (JNR) is a high-speed foreign function interface (FFI) for calling native code from Java without ever writing a line of C. Based on the success of JNR, JDK Enhancement Proposal (JEP) 191 will bring FFI to OpenJDK as an internal API.  The Emerging Languages Bowl: The Big League Challenge In this panel discussion, these emerging languages are portrayed by their respective champions, who explain how they may help your everyday life as a Java developer. Script Bowl 2014: The Battle Rages On In this contest, languages that run on the JVM, represented by their respective language experts, battle for most popular language status by showing off their new features. Audience members will also vote on a language that should not return in 2015. Returning from 2013 are language gurus representing Clojure, Groovy, JRuby, and Scala.


  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people would want to do something like var server = new Server() { ... } and not have to worry about creating the many dependencies and the graph of dependencies that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further).

    Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So:

    - I could hardcode the dependencies. Don't want to do that; we lose our testing!
    - I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable.
    - I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance.

    Having two constructors which are implicitly connected rather than explicitly would be bad design in general in my book. At the moment that's about the least evil I can think of. Opinions? Wisdom?
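
    For reference, a minimal sketch of the two-constructor option; all type names are hypothetical stand-ins:

        // Minimal stand-ins so the sketch is self-contained.
        public interface IListener { }
        public interface IRequestRouter { }
        public class TcpListenerAdapter : IListener { }
        public class DefaultRouter : IRequestRouter { }

        public class Server
        {
            private readonly IListener listener;
            private readonly IRequestRouter router;

            // Simple public API: callers get a working default configuration.
            public Server() : this(new TcpListenerAdapter(), new DefaultRouter()) { }

            // Testable seam: tests (or an IoC container) inject fakes here.
            public Server(IListener listener, IRequestRouter router)
            {
                this.listener = listener;
                this.router = router;
            }
        }

    The two constructors at least stay explicitly connected through the this(...) chain, which softens the "implicitly connected" objection somewhat.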


  • DataContractSerializer truncated string when used with MemoryStream, but works with StringWriter

    - by Michael Freidgeim
    We've used the following DataContractSerializeToXml method for a long time, but recently noticed that it doesn't return the full XML for a large object: it truncates the output to a length that is a multiple of 1024, and the remainder is not included.

        internal static string DataContractSerializeToXml<T>(T obj)
        {
            string strXml = "";
            Type type = obj.GetType(); // typeof(T)
            DataContractSerializer serializer = new DataContractSerializer(type);
            System.IO.MemoryStream aMemStr = new System.IO.MemoryStream();
            System.Xml.XmlTextWriter writer = new System.Xml.XmlTextWriter(aMemStr, null);
            serializer.WriteObject(writer, obj);
            strXml = System.Text.Encoding.UTF8.GetString(aMemStr.ToArray());
            return strXml;
        }

    I tried to debug and searched Google for similar problems, but didn't find an explanation of the error. The closest match, http://forums.codeguru.com/showthread.php?309479-MemoryStream-allocates-size-multiple-of-1024-(, talks about an incorrect length, but not about a truncated string. Fortunately, replacing the MemoryStream with a StringWriter, as described at http://billrob.com/archive/2010/02/09/datacontractserializer-converting-objects-to-xml-string.aspx, fixed the issue:

        var serializer = new DataContractSerializer(tempData.GetType());
        using (var backing = new System.IO.StringWriter())
        using (var writer = new System.Xml.XmlTextWriter(backing))
        {
            serializer.WriteObject(writer, tempData);
            data.XmlData = backing.ToString();
        }
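
    A likely root cause worth noting (my analysis, not from the original post): XmlTextWriter buffers its output internally, so calling aMemStr.ToArray() before the writer has been flushed captures only the complete internal buffers written so far, which is why the length comes out as a multiple of 1024. Flushing first should make the MemoryStream version work as well:

        serializer.WriteObject(writer, obj);
        writer.Flush(); // push the writer's buffered XML into the MemoryStream
        strXml = System.Text.Encoding.UTF8.GetString(aMemStr.ToArray());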


  • Hello PCI Council, are you listening?

    - by David Dorf
    Mention "PCI" to any retailer and you'll instantly see them take a deep breath and start looking for the nearest exit. Nobody wants to be insecure, but few actually believe that PCI does anything more than focus blame directly on retailers. I applaud PCI for making retailers more aware of the importance of security, but did you have to make them PAINFULLY aware? POS vendors aren't immune to this pain either, as we have to undergo lengthy third-party audits in addition to our internal secure programming programs. There's got to be a better way.

    There's a timely article over at StorefrontBacktalk that discusses the inequity of PCI's rules, and also mentions that the PCI Council is accepting comments until April 15th.

    As a vendor, my biggest issue with PCI is that they require vendors to disclose the details of any breaches, in effect "ratting out" customers. I don't think it's a vendor's place to do this. I'd rather have the trust of my customers so we can jointly solve the problem.

    Mary Ann Davidson, Oracle's Chief Security Officer, has an interesting blog posting on this very topic. It's a bit of a long read, but I found it very entertaining and thought-provoking. Here's an excerpt:

        ...heading up the list of "you must be joking" regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I'd like to give [the] PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of "unfortunate side effects of your requirements" - which is as tactful as I can be for reasons that I believe will become obvious below - have gone, to date, unanswered and, more importantly, unchanged.

    I encourage you to read the entire posting, Pain Comes Instantly, and then provide feedback to the PCI Council.


  • How to get started in coding for JBoss

    - by Mister IT Guru
    I have an idea on how to revamp our internal application, after having assessed the needs of the users, addressed their current issues, and the like. But I am not a coder. The last application I wrote was in college, in C (Java wasn't invented-ish!), and it was a booking system, with the option to add on other modules, blah blah. I got an A, but I became a system administrator instead, more interested in designing and maintaining networks and infrastructure; and with the advent of virtualisation and Linux management tools such as Puppet, I can now manage infrastructure in my sleep!

    Now I want to write code - to put on my infrastructure - and I want to build... a booking system! This is just to get experience, but I am at a loss as to where to start. Setting up the environment will take me about a day. Writing the spec, even how I want it to work, I already know. But as for actually coding in a decent manner, I can only guess. If anyone can recommend a book, website, blog, or Twitter person to follow, or just give advice on how to build a kick-butt basic JBoss app, then please, "I AM READY TO LEARN" :)


  • How to have Windows 7 remember a password for a Domain

    - by Kelly Jones
    About eighteen months ago, I wrote a post covering how to clear saved passwords in Windows XP. This week at work I was reminded how useful it is to not only delete saved passwords, but also to set up wildcard credentials using this same interface.

    The scenario I run into as a consultant working at a client site is that my laptop is not a member of the Windows domain that my client uses to secure their network. So, when I need to access file shares, shared printers, or even the client's internal websites, I'm prompted for a name and password. By creating a wildcard entry on my laptop (for the user account that the client issued to me), I avoid this prompt and can seamlessly access these resources. (This also works when you've configured Outlook to access Exchange via RPC over HTTP.)

    How to create a credential wildcard entry in Windows 7:

    - Go to your Start Menu and type "user" into the Search box
    - Click on "Manage your credentials" in the column on the left
    - Click on the "Add a Windows credential" link
    - Enter the Domain (in my case my client's domain), something like this: *.contoso.com
    - Enter the username and password

    That's it. You should now be able to access resources in that domain without being prompted for your name and password. Please note: if you are required to change your password periodically for that domain, you'll need to update your saved password as well.


  • How to efficiently protect part of an application with a license

    - by Patrick
    I am working on an application that has many functional parts. When a customer buys the application, he buys the standard functionality, but he can also buy some additional elements of the application for an additional price. All of the elements are part of the same application executable. A license key is used to indicate which of the elements should be accessible in the application.

    Some of the elements can easily be disabled if the user didn't pay for them. These are typically the modules that you can access via the application's menu. However, some elements pose more problems:

    - What if a part of the data model is related to an optional part? Do I build up these data structures in my application so the rest of my application can just assume they're always there? Or do I not build them, and add checks in the rest of my application?
    - What if some optional part is still useful for performing some internal tasks, but I don't want to expose it to the user externally?
    - What if the person responsible for marketing wants to make a standard part an optional part? All of my application assumes that that part is present, but if it becomes optional, I should add checks on it everywhere in the application.

    I have some ideas on how to solve some of the problems (e.g. interfaces with dual implementations: one working implementation, and one that is activated if the optional part is not activated; see the sketch below). Do you know of any patterns that can be used to solve this kind of problem? Or do you have any suggestions on how to handle this licensing problem? Thanks.
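
    A minimal sketch of the dual-implementation idea mentioned above - essentially the Null Object pattern; all type names are hypothetical:

        public interface IForecastModule
        {
            bool IsLicensed { get; }
            decimal ForecastDemand(string productId);
        }

        // Wired in when the license key enables the module.
        public class ForecastModule : IForecastModule
        {
            public bool IsLicensed { get { return true; } }
            public decimal ForecastDemand(string productId)
            {
                return 42m; // stand-in for the real calculation
            }
        }

        // Inert stand-in when it doesn't: callers (including internal tasks and
        // code touching the shared data model) keep calling the same interface,
        // so license checks aren't scattered through the application.
        public class UnlicensedForecastModule : IForecastModule
        {
            public bool IsLicensed { get { return false; } }
            public decimal ForecastDemand(string productId)
            {
                return 0m; // neutral default
            }
        }

    At startup, the license key decides which implementation is wired in, and UI code can still consult IsLicensed to hide menu entries.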


  • Facebook Payments & Credits vs. Real-World & Charities

    - by Adam Tannon
    I am having a difficult time understanding Facebook's internal "e-commerce microcosm" and what it allows Facebook app developers to do (and what it restricts them from doing). Two use cases:

    - I'm an e-commerce retailer selling clothes and coffee mugs (real-world goods) on my website; I want to write a Facebook app that allows Facebook users to buy my real-world goods from inside of Facebook using real money ($ USD).
    - I'm a high school student trying to raise money for my senior class trip and want to build a Facebook app that allows Facebook users to donate to our class using real money ($ USD).

    Are these two scenarios possible? If not, why not (what Facebook policies prohibit me from doing so)? If so, what APIs do I use: Payments or Credits? And how (specifically) would it work? Do Facebook users have to first buy "credits" (which are mapped to $ USD values under the hood) and pay/donate with credits, or can they whip out their credit card and pay/donate right through my Facebook app? I think that last question really summarizes my confusion: can Facebook users enter their credit card info directly into Facebook apps, or do you have to go through the Payments/Credits APIs as a "middleman"?


  • Object-oriented wrapper around a DLL

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native DLL. The DLL contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the DLL. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor).

    Things are a little awkward when dealing with callbacks from the DLL. Naturally, the callback handlers in my wrapper have to be static, but the callback arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque handle.

    Are there any obvious flaws in this? One thing that does seem to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something I absolutely would like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?
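
    One possible direction (a sketch of an alternative, not a definitive fix; the Device type and callback are hypothetical): hold the instances weakly, so the map can still route callbacks without keeping otherwise-unreachable wrappers alive. Dead entries still need occasional pruning, and callbacks racing finalization need care:

        using System;
        using System.Collections.Concurrent;

        public class Device
        {
            // Weak values: the map no longer roots the wrapper instances.
            private static readonly ConcurrentDictionary<IntPtr, WeakReference<Device>> map =
                new ConcurrentDictionary<IntPtr, WeakReference<Device>>();

            private readonly IntPtr handle;

            public Device(IntPtr handle)
            {
                this.handle = handle;
                map[handle] = new WeakReference<Device>(this);
            }

            // The static native callback routes back to the instance, if it's still alive.
            private static void OnNativeEvent(IntPtr handle, int code)
            {
                WeakReference<Device> weak;
                Device device;
                if (map.TryGetValue(handle, out weak) && weak.TryGetTarget(out device))
                    device.HandleEvent(code);
            }

            private void HandleEvent(int code) { /* instance-level handling */ }
        }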

