Search Results

Search found 5809 results on 233 pages for 'isolated storage'.


  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s): copying 1.8 GB takes me over 10 minutes (it should be < 3 min.). I have two identical SanDisk Cruzer 8 GB sticks, and I have the same problem with both. I have a Super Talent 32 GB USB SSD in the neighboring port and it works at expected speeds. The problem I seem to see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little slower, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:
        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290] sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
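    A quick way to sanity-check the cache explanation implied above: time the copy with forced flushes, so the clock measures the device rather than the OS write cache. Here is a minimal C# sketch (hypothetical, not from the post; FileStream.Flush(true) pushes data past the OS cache) - the same idea as running sync after cp on Linux:

        using System;
        using System.Diagnostics;
        using System.IO;

        class UsbCopyTimer
        {
            static void Main(string[] args)
            {
                string source = args[0]; // file to copy
                string dest = args[1];   // destination path on the USB stick
                byte[] buffer = new byte[1 << 20]; // 1 MiB blocks
                var timer = Stopwatch.StartNew();

                using (var input = File.OpenRead(source))
                using (var output = new FileStream(dest, FileMode.Create, FileAccess.Write))
                {
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        output.Flush(true); // block until this block is on the device
                    }
                }

                timer.Stop();
                double mb = new FileInfo(source).Length / (1024.0 * 1024.0);
                Console.WriteLine("{0:F1} MB in {1:F1} s = {2:F2} MB/s",
                    mb, timer.Elapsed.TotalSeconds, mb / timer.Elapsed.TotalSeconds);
            }
        }

    If the rate this prints matches the "10 minutes for 1.8 GB" figure, the stick itself is the bottleneck, and the progress bar was simply reporting writes into cache.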

    Read the article

  • Provide an OnChange event for an internal property which is controlled externally?

    - by NGLN
    For fun and by request I am updating this ImageGrid component, a kind of listbox for images that has a FileNames property of type TStrings. For ease of writing, I have been misusing its FileNames.Objects property for bitmap storage. But since the TStrings type suggests that users of the component could or would want to use the Objects property for custom data, e.g. like TListBox.Items, I am rewriting the component to store the bitmaps elsewhere and leave FileNames.Objects untouched for unknown future usage. Now I am wondering whether to provide an OnChange event, and if so, whether to fire it when one or more FileNames.Objects entries change. Trying to answer it myself, I dove into Delphi's own VCL and stumbled on:
    - TMemo: has an OnChange event, but ignores Lines.Objects
    - TListBox: has no OnChange event, but is capable of storing Items.Objects
    - TStringGrid: has no OnChange event, but is capable of storing Objects, Rows.Objects, Cols.Objects
    So now I am somewhat puzzled, because I cannot imagine Borland's developers didn't add events for several Objects properties out of ease. Sure, when a user changes a FileNames.Object in my component, he knows he does and could implement appropriate interaction himself. But wouldn't it be convenient if the component did so automatically? What would you expect from this component in this regard?
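    For what it's worth, the convention being debated can be sketched in a few lines. Here is a language-neutral illustration in C# (all names hypothetical) of a wrapper that raises a single OnChange for any mutation made through the component's own API, including changes to the per-item data slot:

        using System;
        using System.Collections.Generic;

        // Hypothetical stand-in for a FileNames-style property.
        class FileNameList
        {
            private readonly List<string> names = new List<string>();
            private readonly List<object> objects = new List<object>();

            public event EventHandler Changed; // the OnChange in question

            public void Add(string name, object data)
            {
                names.Add(name);
                objects.Add(data);
                RaiseChanged();
            }

            public object GetObject(int index) { return objects[index]; }

            public void SetObject(int index, object data)
            {
                objects[index] = data;
                RaiseChanged(); // the debatable part: notify on Objects changes too
            }

            private void RaiseChanged()
            {
                var handler = Changed;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

    The cost of firing on SetObject is noise for users who store static custom data; the cost of not firing is silent staleness for users who expected a notification. A separate, optional OnObjectChange event is one way to split the difference.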

    Read the article

  • Trade In, Trade Up Promotion: SPARC Consolidation Now Through May 31st

    - by swalker
    Dear Partner, Installed Base Business (IBB) technology refresh is one of the most important activities for Oracle, for you, and for your customers. It allows your existing customers to benefit from the most up-to-date, best-of-breed Oracle products. And it's an exciting time to perform a technology refresh: a new SPARC promotion is available now, closing 31st May 2012. Customers trading in older SPARC systems and upgrading to a new SPARC SuperCluster T4-4 or SPARC Enterprise M8000/M9000 can get $4,000 per CPU. The discount is pre-approved and upfront (maximum discounts apply). The major highlights are as follows:
    - Targeted systems: upgrade to SPARC M8000, M9000, SuperCluster
    - Qualified installed base: upgrade from all older generations of SPARC systems
    - Promotional offer: trade-in value of $4K per CPU; pre-approved maximum discount (including trade-in) not to exceed 60% on M8000/M9000 systems and 25% on SuperCluster; no-cost dock-to-dock shipping and environmentally safe disposal of the returned hardware through Oracle's best-of-class recycling processes
    We recommend that you take the following actions: as usual, please register your opportunities in OMM, and when you do so, make sure you place the following campaign name in the "Marketing Initiative" field of OMM: EMEA_Tech Refresh-IBB Campaign_12H1_Follow Up_O. For all the details, please view the rules and FAQs. For more information, please visit the Promo Partner Site here. For more information on IBB and the Oracle Upgrade Advantage Program (UAP): http://www.oracle.com/us/products/servers-storage/upgrade-advantage-program/index.html and http://www.oracle.com/partners/secure/sales/oracle-ibb-program-for-partners-184291.html. For questions, please contact your favorite Oracle Partner Account Manager.

    Read the article

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded and used if a feature isn't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, the Java Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there; if the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by putting it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory, and putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR; a sketch of the idea follows.
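    The same pattern can be sketched outside Java. Below is a minimal C# analogue (assembly and type names are hypothetical) where an optional feature ships as a separate assembly that is only loaded - and only occupies volatile memory - if the feature is actually exercised at runtime; removing the file from persistent storage simply makes the feature unavailable:

        using System;
        using System.IO;
        using System.Reflection;

        static class OptionalDiagnostics
        {
            private static object tracer; // stays null (and unloaded) if never used

            public static void StartTracing()
            {
                if (tracer == null)
                {
                    try
                    {
                        // "Acme.Diagnostics" is a hypothetical optional assembly,
                        // present only on full installs.
                        Assembly optional = Assembly.Load("Acme.Diagnostics");
                        Type type = optional.GetType("Acme.Diagnostics.Tracer", true);
                        tracer = Activator.CreateInstance(type);
                    }
                    catch (FileNotFoundException)
                    {
                        Console.WriteLine("Tracing is not available on this install.");
                        return;
                    }
                }
                tracer.GetType().GetMethod("Start").Invoke(tracer, null);
            }
        }

    Callers that never invoke StartTracing never pay the load cost, which is exactly the embedded-friendly behavior described above.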

    Read the article

  • Great tools to demonstrate and showcase the ODA appliance

    - by user12842161
    1. Introduction to the Oracle Database Appliance video: http://www.youtube.com/watch?v=DU0hCO_-q-8
    2. Oracle Database Appliance Configurator (can be run in standalone mode on your PC): http://www.oracle.com/technetwork/server-storage/database-appliance/oracle-database-appliance-manager-1352946.zip
    3. Oracle Database Appliance 3D demo: http://oracle.com.edgesuite.net/producttours/3d/databaseappliance/

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, and an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all those into a single folder/directory, but what if they already exist in disparate directories, and they need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file and have any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?
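    The tagging idea is easy to prototype as a layer above the filesystem, which also shows why it keeps getting built in user space rather than in the filesystem itself. A minimal C# sketch (all names hypothetical; persistence omitted) of a tag index that leaves files wherever they already live:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        // A toy tag index: files stay in their disparate directories,
        // tags live in a sidecar structure instead of in the hierarchy.
        class TagIndex
        {
            private readonly Dictionary<string, HashSet<string>> byTag =
                new Dictionary<string, HashSet<string>>(StringComparer.OrdinalIgnoreCase);

            public void Tag(string path, params string[] tags)
            {
                foreach (string tag in tags)
                {
                    HashSet<string> files;
                    if (!byTag.TryGetValue(tag, out files))
                        byTag[tag] = files = new HashSet<string>();
                    files.Add(Path.GetFullPath(path));
                }
            }

            // Relational-flavored query: files carrying ALL of the given tags.
            public IEnumerable<string> Find(params string[] tags)
            {
                IEnumerable<string> result = null;
                foreach (string tag in tags)
                {
                    HashSet<string> files;
                    IEnumerable<string> matches = byTag.TryGetValue(tag, out files)
                        ? files : Enumerable.Empty<string>();
                    result = result == null ? matches : result.Intersect(matches);
                }
                return result ?? Enumerable.Empty<string>();
            }
        }

        class Demo
        {
            static void Main()
            {
                var index = new TagIndex();
                index.Tag(@"C:\art\logo.png", "project-x", "image");
                index.Tag(@"C:\docs\spec.odt", "project-x", "document");
                foreach (string file in index.Find("project-x"))
                    Console.WriteLine(file); // both files, despite disparate directories
            }
        }

    Everything here is metadata about paths, which is why the OS could in principle own it - and why, absent that, it lives in application-level databases instead.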

    Read the article

  • Brownfield Support for OVMSS

    - by Owen Allen
    The area of virtualization saw quite a few enhancements with version 12.2. There's one particular virtualization enhancement that can make a big difference for a lot of people: support for brownfield Oracle VM Servers for SPARC. Brownfield refers to Oracle VM Servers for SPARC that were created outside of Ops Center. In older versions of Ops Center, you couldn't really do anything with them - Ops Center could only manage OVM Servers that it created. If you had OVM Servers outside of Ops Center, you'd have to recreate them if you wanted to manage them. In 12.2, though, this problem is cleared up. You can discover and manage OVM Servers for SPARC that you created outside of Ops Center, so long as the LDom Manager is running. When you discover the control domain, all of the logical domains are automatically discovered and managed, and they appear under the control domain in the Asset tree. If you want to use server pools and migrate the logical domains to a different Oracle VM Server for SPARC system, you'll need to move the metadata to a shared library, use shared Fibre Channel or iSCSI LUNs for the guest domain storage, and add the server to a server pool. See the Oracle VM Server for SPARC chapter for more information.

    Read the article

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find the reason to use EC2. What's the point of scaling with EC2? "If you expect a huge burst in traffic," they say. OK, but what if you already have a couple of sites with good traffic, and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60/month (medium reserved for 1 year) in the EU (Ireland) region, plus traffic, plus optional expenses for databases and S3 if you use them. Of course, up to a point (somewhere under $56.60-$66.10 per month) you can optimize your hosting costs with Amazon EC2. But if you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic. (Am I wrong?)
    CPU: i7-2600 quad-core (3.4-3.8 GHz)
    RAM: 16 GB
    HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS
    Traffic: 10 TiB per month included
    This is what you get from Hetzner for $56 (excluding 19% VAT; $66 for EU residents). Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner take that Amazon Auto Scaling will? Is the maintenance of a dedicated server vs. EC2 still the same? Or won't a hardware failure at Amazon ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether Amazon's infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only he can access.
    Cons:
    - It's not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1 GB. You might be paying for storage that you don't really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user.
    Pros:
    - You only pay for a single database.
    - Data is isolated at the database level. If the credentials for one tenant are compromised, the data for the other tenants is not.
    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    Centralizing the database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All the data operations are performed through stored procedures that centralize the access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros:
    - You only pay for a single database.
    - You only have one set of objects to maintain and back up.
    Cons:
    - There is no real isolation. All the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM, like EF Code First, for consuming the data.
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria.
    Pros:
    - You only have a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data (see the sketch below).
    Cons:
    - There is no real isolation at the database level. The isolation is enforced programmatically with federations.
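    As a rough illustration of the connection filtering mentioned under SQL Federations, here is a minimal ADO.NET sketch (connection string, federation name, and column are hypothetical) that routes a pooled connection to one tenant's federation member before querying; FILTERING = ON scopes every subsequent statement to that tenant's rows:

        using System;
        using System.Data.SqlClient;

        class TenantQuery
        {
            static void Main()
            {
                const string connectionString =
                    "Server=tcp:myserver.database.windows.net;Database=SaasDb;" +
                    "User ID=app;Password=...;Encrypt=True;";
                const int tenantId = 42; // an integer under application control

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // Route this connection to the tenant's federation member.
                    var use = new SqlCommand(
                        "USE FEDERATION TenantFederation (TenantId = " + tenantId + ") " +
                        "WITH RESET, FILTERING = ON", connection);
                    use.ExecuteNonQuery();

                    // An ordinary query - or an ORM - now sees only this tenant's rows.
                    var query = new SqlCommand("SELECT Name FROM Customers", connection);
                    using (var reader = query.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                }
            }
        }

    This is what makes the federation approach ORM-friendly: the tenant routing happens once per connection, not in every query.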

    Read the article

  • Dual Booting Windows 7 and Ubuntu 12.04. Partition Sizes?

    - by John F.
    I'm about to reinstall Windows, so I thought that I'd try Ubuntu out on a partition just for fun. My question is, how large should my partitions be for each of them? I know this varies depending on what you use, so I'll give you a general idea of what I have and what I have in mind. I'm currently running:
    Windows 7 Professional (64-bit)
    RAM: 4 GB
    CPU: 2.5 GHz quad-core processor
    HDD: 500 GB
    GPU: 1 GB Nvidia GeForce
    I have around 130 GB in Steam games, and some heavier applications like Photoshop CS6 and Sony Vegas Pro 11. Other applications I use are: Chrome, Skype, Dxtory, Fraps, OpenOffice, BitTorrent, and other assorted smaller programs. So, I was thinking that I would give my Windows partition about 150-200 GB, my Ubuntu partition around 20 GB, and the rest to shared storage. I'm not really sure if I'd need more or less on Ubuntu, because I've never used it and I'm not really sure what kind of apps I'd be using over there. This would also be a clean install, so I'd be wiping my HDD, creating the partitions in GParted, then installing Windows, with Ubuntu following that. Any critique you could give me? Maybe explanations of what the /root, /boot and /home partitions I hear about are? Thanks in advance if you actually read this lengthy thing! Any help is appreciated. (x

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:
    Why Oracle: "Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support," said Sanjib Roy, CTO, Globacom.
    Implementation process: It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process - from site planning to phase-I go-live - was executed in just over ten weeks, well ahead of the four months allocated to complete the project. Oracle partner mCentric leveraged Oracle Advanced Customer Support Services' implementation methodology to ensure configurations were tailored for peak performance, all patches were applied, and software and communications were consistently tested using proven methodologies and best practices. Read the entire profile here.

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 - the only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per dollar. The 7420 is the cheapest per MBPS. The 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable and, as a cluster, has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things:
    1. Administrators' comfort level with older legacy systems.
    2. Politics.
    3. Past issues with Oracle Support.
    I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. "when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"!
    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related Tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.
    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.
    At this point you might be asking, "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:

    Task Name                 Process Model   Use Case   Notes
    Manage outbound calls           ✓                    A series of linked human and system tasks for calling and following up with prospects
    Manage content revision                       ✓      Updating the content on a website
    Update User Preferences                       ✓      Updating a user's display preferences
    Assign Lead                     ✓                    Reviewing a lead, then assigning it to a sales person
    Convert Lead to Quote           ✓                    Updating the status of a lead, and then converting it to a sales order

    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, a Use Case approach might be worth considering.
    Now let's take one final scenario: as you captured the To-Be Process flows for the Sales Force Automation project, you discovered a "Gap" in terms of what the client requires and what the standard COTS application can provide. Let's assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • TCO Models: Oracle invites you to the next webcast dedicated to Partners on Oracle Hardware and Solutions

    - by mseika
    The Oracle Hardware and Solutions Webcast Series is a sequence of one-hour Sales-Oriented Web Seminars, dedicated to Oracle Partners. This is your opportunity to learn about Oracle's Servers, Storage and Solutions strategy and portfolio, and their business value for you and your customers. The next appointment is for Wednesday, June 22, at 9am UKT / 10am CET: Total Cost of Ownership Models Examples: How we can sell Oracle HW and SW together, with better TCO, and benefits for both Oracle and the Partner! With Ilkka Vanhanen, Business Development Manager, Oracle EMEA Hardware Strategy Organization, who will talk about:
    - Concrete examples of the HW/SW TCO Model
    - Understand the competitive advantage provided by Oracle-on-Oracle Solutions
    - How Partners get new business from Oracle-on-Oracle Solutions
    - How Oracle is changing the game in Enterprise IT
    - Key benefits for both SW Resellers and HW Resellers
    To register click here, then click on the "Servers and Solutions Webcasts" tab.

    Read the article

  • not being able to access any sudo function on my pc

    - by explorex
    Hi, I am not able to access any sudo functions on my desktop. My only OS is Ubuntu 10.04 Lucid, and I am new to Ubuntu. I rebooted my computer thinking that Google Chrome had crashed: I opened Google Chrome, but it showed its opening message and never opened, so I restarted my computer. While my system was loading, I was playing with the keyboard and don't know what I typed, and when Ubuntu loaded, I was unable to access anything. Some of the symptoms are listed below:
    - I cannot hear any sound.
    - I cannot access the wired Ethernet connection in the corner where I usually enable it, and I have no internet.
    - There is no local Apache server either; whenever I try to start Apache I get a message like "setuid must be root".
    - Whenever I type sudo, I get the message "setuid must be root".
    - I cannot access external storage devices like a pen drive or a portable hard drive, and cannot mount my other drives with the FAT32 filesystem.
    - When I try to start my Apache web server without typing sudo, I get a message like "cannot open socket".
    EDIT: I remember also running the command chown -R www-data / earlier and getting an error message.
    EDIT: And I cannot shut down my computer; it only logs off.

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server, and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times when SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I can learn. Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity, since he appeared to be using a fruit laptop. He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory, and continued by cautioning that having too many indexes could defeat this approach. His logic was sound, if not always practical, but it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases, prior to moving on to data warehouse solutions. Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed, and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details needed easily. Just what the doctor ordered.

    Read the article

  • is Java free for mobile development?

    - by exTrace101
    Q1. I would like to know whether it's free for a developer (I mean, whether I have to pay no royalties to Sun/Oracle) to develop (Android) mobile apps in Java. After reading this snippet about the Java "field of use" terms, I'm getting the impression that Java is not free for mobile development. Is that right?
        "General Purpose Desktop Computers and Servers" means computers, including desktop and laptop computers, or servers, used for general computing functions under end user control (such as but not specifically limited to email, general purpose Internet browsing, and office suite productivity tools). The use of Software in systems and solutions that provide dedicated functionality (other than as mentioned above) or designed for use in embedded or function-specific software applications, for example but not limited to: Software embedded in or bundled with industrial control systems, wireless mobile telephones, wireless handheld devices, netbooks, kiosks, TV/STB, Blu-ray Disc devices, telematics and network control switching equipment, printers and storage management systems, and other related systems are excluded from this definition and not licensed under this Agreement...
    And from http://www.excelsiorjet.com/embedded/ : "Notice: The Java SE Embedded technology license currently prohibits the use of Java SE in cell phones."
    Q2. How come this plethora of Android Java developers isn't paying Sun/Oracle a dime?

    Read the article

  • Creating deterministic key pairs in javascript for use in encrypting/decrypting/signing messages

    - by SlickTheNick
    So I have been searching everywhere and haven't been able to find anything with the information I need, so I'm a bit stumped on this one at the moment. What I am trying to do is create a public/private key pair (like PGP) upon a user's account creation, based on their passphrase and a random seed. The public key would be saved on the server, and ideally the private key would never be seen by the server whatsoever. The user could then sign in and send a message to another user. Before the message is sent, the sender's key pair would be regenerated on the fly based on their credentials (and maybe a password prompt) and used to encrypt the message. The receiver would then use their own regenerated private key to decrypt said message. The server itself should never see any plaintext passwords, private keys, or readable messages. I'm a bit unsure how I could go about implementing this. I've been looking into PGP, specifically openPGP.js. The main trouble I am having is being able to regenerate the key pair based off a specific seed; PGP seems to have random output even if the inputs are the same. Storing the private key in a cookie or in HTML5 storage or something also isn't really an option - too unreliable. Can anyone point me in the right direction?
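    One way to get the deterministic half of this (sketched in C# for brevity rather than JavaScript; all parameters are illustrative): derive a fixed-length seed from the passphrase and a stored per-user salt with PBKDF2, then feed it to an algorithm whose private key IS a 32-byte seed, such as Ed25519. PBKDF2 is fully deterministic, which sidesteps the random-output behavior described above:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class DeterministicKeySeed
        {
            // Same passphrase + same salt => same seed, always.
            static byte[] DeriveSeed(string passphrase, byte[] salt)
            {
                using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 100000))
                {
                    return kdf.GetBytes(32);
                }
            }

            static void Main()
            {
                // The salt is generated randomly at account creation and stored
                // server-side; it must be unique per user, but not secret.
                byte[] salt = Encoding.UTF8.GetBytes("per-user-salt-from-server");

                byte[] a = DeriveSeed("correct horse battery staple", salt);
                byte[] b = DeriveSeed("correct horse battery staple", salt);

                Console.WriteLine(Convert.ToBase64String(a)); // identical...
                Console.WriteLine(Convert.ToBase64String(b)); // ...on every run

                // An Ed25519 private key is exactly such a 32-byte seed, so an
                // Ed25519 library on the client can rebuild the same key pair
                // from this seed; only the public half ever reaches the server.
            }
        }

    The same construction works in the browser: PBKDF2 exists in JavaScript crypto libraries, and a seed-based Ed25519/Curve25519 implementation can replace the nondeterministic openPGP.js key generation.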

    Read the article

  • Free Xsigo Technical Pre-sales workshop for Selected Partners !

    - by mseika
    In 2012 Oracle acquired Xsigo, a developer of network I/O virtualisation solutions. This acquisition complements Oracle's extensive virtualisation portfolio. With Oracle Virtual Networking products (Xsigo) you can:
    - Virtualise connectivity from any server to any storage and any network
    - Reduce datacentre complexity by 70%
    - Cut infrastructure expenses by up to 50%
    Benefits to Channel Partners:
    - Offer a unique proposition that your competitors can't match
    - Provide an innovative solution that delivers more performance at less cost
    - High margins that help sell more products and services
    This course is aimed at Technical Pre-Sales Consultants, equipping them to provide detailed demos and to architect RFP feedback and customer solutions. The language of this event is French.
    WHEN: 24th September 2013
    WHERE: Oracle France, 15, boulevard Charles De Gaulle, 92715 COLOMBES
    FEES: Free of charge
    Agenda:
    09.00: Welcome, Coffee & Introduction
    09.30: Value Propositions, Architecture & Use Cases
    11.30: Build a OVN Web Quote & TCO
    12.30: Lunch
    13.30: Competitive Summary
    14.00: Design Scenario Workshop
    15.45: Questions/Opportunities
    REGISTRATION: Register via this link as soon as possible, 14th June at the latest. Note that we have only 20 seats in total for this event, and that after 14th June we will release free seats for other organizations to register. We look forward to your participation!
    What we expect from you: You will bring your own laptop (recommended browser is Firefox 10 ESR). You have checked the material and conducted the assessments. You will be flexible in terms of agenda and progress, as we intend this to be more of a workshop with dialogue rather than sticking tightly to the tentative timeline.
    What this is not: This PartnerLab does not replace Oracle University trainings. It does not lead to a certification as such, and it does not enable Partners with full and complete implementation skills.

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files; the .htaccess file also had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (which is CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which found no malware that might have been storing my FTP passwords. I used Sucuri SiteCheck to check my website, which says it is clean of malware. That is bizarre, because I just attempted to click one of the links on the site a minute ago and was taken to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (it will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:
        Warning: Something's Not Right Here! oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site. Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else? We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.
    I'm wondering if it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
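    Since the injected marker is a fixed string, one practical first step is scanning a local copy of the site tree for it. A minimal C# sketch (the marker comes from the post; the path is a placeholder) that lists every .php or .htaccess file containing the injected code:

        using System;
        using System.IO;
        using System.Linq;

        class InjectionScanner
        {
            static void Main()
            {
                const string siteRoot = @"C:\sites\mysite"; // local copy of the site
                const string marker = "#c3284d#";           // marker from the injected code

                var suspects = Directory
                    .EnumerateFiles(siteRoot, "*", SearchOption.AllDirectories)
                    .Where(f => f.EndsWith(".php", StringComparison.OrdinalIgnoreCase)
                             || Path.GetFileName(f) == ".htaccess")
                    .Where(f => File.ReadAllText(f).Contains(marker));

                foreach (string file in suspects)
                    Console.WriteLine("Infected: " + file);
            }
        }

    Bear in mind this only finds the visible marker: repeated re-infection usually means a separate backdoor file or stolen credentials, which a string scan alone won't catch.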

    Read the article

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth re-visiting. Check out this video, and then read on: So why so interesting? Well - you probably guessed from the title - ADF is involved. Indeed, this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back-end, there's basically a robot. That's right, this remarkable tape drive is controlled through ADF, using all your usual friends of ADF Faces, Controller and Binding (but no ADF BC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console, which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is the fact that all of this is running out of flash storage on a ridiculously small form factor with a tiny processor - I probably shouldn't reveal the actual specs, but take my word for it, don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in, and I'm pumped to see such a good result; I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

    Read the article

  • Slightly off topic - How to Fix Sky Go Error [t6013-c1501] (and [t6000-c1501])

    - by bconlon
    Sky doesn't seem to understand what their own errors mean, so I cobbled together an understanding from some other posts and managed to get it working. When you see the error [t6013-c1501] instead of your TV programme in Sky Go, it seems to mean: 'You registered a device, but then changed the hardware, so now I'm confused!' In other words, the digital rights management (DRM) used between Sky Go and Silverlight stored an old fingerprint of your PC, but rather than recognising this and allowing you to remove the device, it just disappears from the 'Manage Devices' page.
    DISCLAIMER: Perform the following steps at your own risk. It worked for me, but I didn't care if it broke stuff. If you care... don't do it!
    So, to fix this I did the following:
    1. Log in to Sky Go and click 'Watch live TV' from the home page. It will attempt to show Sky News and fail with the error [t6013-c1501].
    2. Right-click on the error and you should see the menu option 'Silverlight'. Select this and a dialog should appear. Click the 'Application Storage' tab and delete any entry that relates to Sky Go. Click OK to close the dialog.
    3. Open Explorer and navigate to the folder C:\ProgramData\Microsoft\PlayReady
    4. Rename the file mspr.hds to mspr.hds.OLD
    5. Go back to the browser and press F5. You may need to log out and log in again (not sure).
    Note: Don't rename/delete the folder C:\ProgramData\Microsoft\PlayReady or you will get the error [t6000-c1501]. The folder must exist in order for the new file to be created by Silverlight.
    Techie talk: whoever wrote the code to create a new mspr.hds file didn't write code to check that the folder existed, causing what I assume is a generic t6000 error, probably something like:
        catch (Exception ex) { WriteToLog("Oops, something broke!"); }

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (e.g. LinkedIn, Facebook, Twitter) to analyze how users interact with one another. For that, we need to parse the data, store it in a db, and analyze it so that the strength of the relation between two users can be determined. We'll be accessing data offline as well, to provide users with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for great performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is the parsing of data and its storage and retrieval with the least overhead. For that, I've summarized a few points below:
    - Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though no team member has experience with them?
    - Should we use SQL Server Analysis Services?
    - Is there any library other than Json.NET for parsing data? (A parsing sketch follows below.)
    - Is it advisable to use any C# library over FQL + GET requests?
    I've tried to provide as much info as possible. Please share your views.
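    On the Json.NET question: it handles Graph API responses directly, and for this workload the parsing itself is rarely the bottleneck. A minimal sketch (the JSON is abbreviated, and the field names follow the Graph API shape of that era, so treat the details as assumptions) that pulls the author and like count out of a feed item:

        using System;
        using Newtonsoft.Json.Linq;

        class FeedParser
        {
            static void Main()
            {
                // Abbreviated stand-in for one item of a /me/feed response.
                string json = @"{
                  ""data"": [
                    { ""id"": ""1_2"",
                      ""from"": { ""name"": ""Some User"", ""id"": ""1"" },
                      ""message"": ""Hello world"",
                      ""likes"": { ""count"": 3 } }
                  ]
                }";

                JObject feed = JObject.Parse(json);
                foreach (JToken post in feed["data"])
                {
                    string author = (string)post["from"]["name"];
                    int likes = (int?)post.SelectToken("likes.count") ?? 0;
                    Console.WriteLine("{0}: {1} like(s)", author, likes);
                }
            }
        }

    The heavier design decision is where the parsed interactions land (relational rows vs. documents), since the influence analysis will be dominated by storage and query cost, not by deserialization.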

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have become more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this:
        producer.foo(item => { updateItemInDb(item); insertLog(item) })
        // calls the function passed as argument as an item is processed
    But I'm now wondering if I should use a more "OO" approach:
        interface IItemObserver { onNotify(Item) }
        class DBObserver : IItemObserver ...
        class LogObserver : IItemObserver ...
        producer.addObserver(new DBObserver)
        producer.addObserver(new LogObserver)
        producer.foo() // calls observers in a loop
    What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it?
    EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionality in the pattern? See the sketch below.) Just passing a new function, for example?
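    On the EDIT question, a common middle ground (a hedged sketch; all names invented here) keeps observers as plain functions and makes subscription return a handle whose disposal removes the function again - the add/remove half of the classic pattern without the interface ceremony:

        using System;
        using System.Collections.Generic;

        class Producer<TItem>
        {
            private readonly List<Action<TItem>> observers = new List<Action<TItem>>();

            // "Adding an observer" is just storing a function; the returned
            // IDisposable is the removal half of the pattern.
            public IDisposable Subscribe(Action<TItem> observer)
            {
                observers.Add(observer);
                return new Unsubscriber(() => observers.Remove(observer));
            }

            public void Publish(TItem item)
            {
                foreach (var observer in observers.ToArray()) // copy: handlers may unsubscribe mid-loop
                    observer(item);
            }

            private sealed class Unsubscriber : IDisposable
            {
                private readonly Action remove;
                public Unsubscriber(Action remove) { this.remove = remove; }
                public void Dispose() { remove(); }
            }
        }

        class Demo
        {
            static void Main()
            {
                var producer = new Producer<string>();
                IDisposable log = producer.Subscribe(i => Console.WriteLine("log: " + i));
                producer.Subscribe(i => Console.WriteLine("db: " + i));

                producer.Publish("first");  // both observers fire
                log.Dispose();              // remove an observer, functionally
                producer.Publish("second"); // only the db observer fires
            }
        }

    This is essentially what IObservable<T>/Rx standardized: the "OO" pattern survives, but the observers themselves are just functions.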

    Read the article

  • What's wrong performing unit test against concrete implementation if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite that was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system, and our db schema has over 150 tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing Repository, Unit of Work, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand-written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that's very likely a YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial in our case (see the sketch below). Any thoughts on whether I am missing anything here?
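    To make that concrete, here is a minimal sketch (MSTest with EF 4.1 code first; the model and context are hypothetical stand-ins for the real 150-table schema) of the kind of integration test meant above - no repository abstraction, no mocks, just the concrete DbContext against a dedicated test database:

        using System.Data.Entity;   // EntityFramework 4.1
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public bool IsActive { get; set; }
        }

        public class ProductContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

        [TestClass]
        public class CustomerQueryTests
        {
            [TestMethod]
            public void ActiveCustomers_ExcludeDeactivatedAccounts()
            {
                // The test config points ProductContext at a throwaway test database;
                // real suites typically wrap each test in a transaction or rebuild it.
                using (var db = new ProductContext())
                {
                    db.Customers.Add(new Customer { Name = "Active Co", IsActive = true });
                    db.Customers.Add(new Customer { Name = "Gone Ltd", IsActive = false });
                    db.SaveChanges();

                    var active = db.Customers.Where(c => c.IsActive).ToList();

                    Assert.IsFalse(active.Any(c => c.Name == "Gone Ltd"));
                }
            }
        }

    The trade-off is exactly the one described above: these tests are slower and need a database, but they exercise the fixed schema for free, which is where most of the risk in this system lives.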

    Read the article
