Search Results

Search found 14145 results on 566 pages for 'level of detail'.

Page 211/566 | < Previous Page | 207 208 209 210 211 212 213 214 215 216 217 218  | Next Page >

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing some sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really understand how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate how similar two images are by comparing their hashes (Levenshtein distance or something of the sort).

    Read the article
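
    A minimal sketch of the hashing idea described above. The poster's site is in PHP (where hash_file('sha256', $path) does the job in one call); the sketch below is in Java purely for illustration, and the class and directory names are made up. As the poster suspects, this only catches byte-for-byte duplicates, not resized or re-encoded copies:

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;

        public class ContentAddressedStore {

            // Copies the uploaded file into storeDir under the hex SHA-256 of its bytes.
            // Returns the digest so it can also be stored in the database alongside
            // the original filename, dimensions, and so on.
            public static String save(Path upload, Path storeDir) throws Exception {
                byte[] bytes = Files.readAllBytes(upload);
                byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);

                StringBuilder name = new StringBuilder();
                for (byte b : digest) {
                    name.append(String.format("%02x", b));
                }

                Path target = storeDir.resolve(name.toString());
                // Same digest means the exact same bytes were uploaded before:
                // skip the copy and reuse the existing file.
                if (!Files.exists(target)) {
                    Files.copy(upload, target);
                }
                return name.toString();
            }

            public static void main(String[] args) throws Exception {
                System.out.println(save(Paths.get(args[0]), Paths.get(args[1])));
            }
        }

    For near-duplicate detection (resized or recompressed images), a cryptographic hash is the wrong tool; that usually calls for a perceptual hash, which is a separate technique from the one asked about.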

  • What should be a fair amount of time in an interview before rejecting a candidate?

    - by Danish
    As a panelist for technical interviews, you often come across candidates who have all the required educational qualifications, skill sets and experience level on their resumes, but struggle to answer even the most basic questions. Ideally, technical interviews should try to check different aspects of a candidate and test them on various skills. So, if the candidate falters on one aspect, one should test the other ones before coming to a conclusion. But often, if a candidate falters on the first few questions, the red flag goes up pretty quickly. What in your opinion should be the bare minimum time spent with a candidate before making a fair assessment of his/her skills and suitability for the job?

    Read the article

  • Support / Maintenance documentation for development team

    - by benwebdev
    Hi, I'm working in the development department (around 40 developers) of a large e-commerce company. We've grown quickly but have not evolved very well in the field of documenting our work. We work with an Agile / Scrum-like methodology in our development and testing, but documentation seems to be neglected. We need to be able to produce documentation that would aid a developer who hasn't worked on our project before or is new to the company. We also have to create more high-level information for our support department, explaining any extra config settings and fixes for known issues that may arise, if any. Currently we put this in a badly put-together wiki, based on an old SharePoint / TFS site. Can anyone suggest some useful links or advice on improving the documentation standard? What works in other companies? Has anyone got advice on developing documentation as part of an agile process? Many thanks, ben

    Read the article

  • Integrating eBay and PayPal inventory

    - by JW01
    Say I have an item for sale on eBay, and the same item for sale on another site via PayPal. Is it possible to have sales on one site reflected in the inventory for the other site, and vice-versa? In other words, if I have ten items for sale, and I buy one on either site, it should show that there are nine items left on both sites. I know that PayPal has an API for setting the inventory level of an item associated with a button. eBay also has an API for controlling an item's inventory. I'm wondering if anyone has tried to integrate them.

    Read the article
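
    Neither the eBay nor the PayPal inventory API is shown below (their actual calls would need to be looked up in the respective documentation); this is only a hedged sketch of the usual shape of such an integration: keep one authoritative count, and have a small adapter per channel push that count out whenever a sale notification arrives from either side. All names are hypothetical:

        import java.util.List;

        /** Hypothetical adapter for one sales channel (eBay listing, PayPal button, ...). */
        interface ChannelAdapter {
            /** Pushes the authoritative quantity to that channel's inventory API. */
            void publishQuantity(String sku, int quantity);
        }

        /** Single source of truth for stock, shared by every channel. */
        class InventoryService {
            private final String sku;
            private int quantity;
            private final List<ChannelAdapter> channels;

            InventoryService(String sku, int initialQuantity, List<ChannelAdapter> channels) {
                this.sku = sku;
                this.quantity = initialQuantity;
                this.channels = channels;
            }

            /** Called from the sale-notification handler of whichever channel made the sale. */
            synchronized void recordSale(int unitsSold) {
                quantity -= unitsSold;
                // Fan the new count out so both sites show the same remaining stock.
                for (ChannelAdapter channel : channels) {
                    channel.publishQuantity(sku, quantity);
                }
            }
        }

    The hard part in practice is the notification plumbing on each side, which is exactly what the question is asking whether anyone has already wired up.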

  • What is the best free program to burn DVD movies from DVD movie files that are present on the hard drive the way ImgBurn does it in Windows?

    - by cipricus
    Evaluating burning programs is difficult: it takes time, and errors are very unpleasant. I have looked at AcetoneISO, Brasero, and K3b (which many recommend) but they seem more limited than Nero, CDBurnerXP and especially ImgBurn from Windows. They all seem to work (I have not tested them all fully myself) but at a more basic level (create ISOs, burn them, etc.). When it comes to something like creating a DVD movie disc from DVD movie files present on the hard drive, it seems to me that I would have to create the ISO first and then burn it, which uses more hard drive space and more time. Looking in the Windows direction, there is a Nero for Linux, but it costs money. ImgBurn is said to work well in Wine, but as yet I have not tested it enough. What other options are there?

    Read the article

  • Approaching Java/JVM internals

    - by spinning_plate
    I've programmed in Java for about 8 years and I know the language quite well as a developer, but my goal is to deepen my knowledge of the internals. I've taken undergraduate courses in PL design, but they were very broad academic overviews (in Scheme, IIRC). Can someone suggest a route to start delving into the details? Specifically, are there particular topics (say, garbage collection) that might be more approachable or be a good starting point? Is there a decent high-level book on the internals of the JVM and the design of the Java programming language? My current approach is going to be to start with the JVM spec and research as needed.

    Read the article
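
    One gentle way to start on internals is to poke at the running VM from plain Java before opening the JVM specification. The standard java.lang.management API, for example, will tell you which garbage collectors the VM chose and how often they have run; a small sketch:

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;

        public class GcProbe {
            public static void main(String[] args) {
                // Allocate some short-lived garbage so the collectors have work to do.
                for (int i = 0; i < 1_000_000; i++) {
                    String s = new String("garbage-" + i);
                }
                // Each MXBean corresponds to one collector in the running VM
                // (typically one for the young generation and one for the old).
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
            }
        }

    Running it with different -XX collector flags and heap sizes, and watching how the numbers change, pairs nicely with the GC chapters of the JVM spec and the HotSpot documentation.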

  • Oracle Linux 6.3 has been released

    - by Lenz Grimmer
    We're happy to announce the availability of Oracle Linux 6.3, the third update release for Oracle Linux 6. ISO images can now be obtained from the Oracle Software Delivery Cloud, the individual RPM packages have already been published from our public yum repository. This distribution now includes the Unbreakable Enterprise Kernel Release 2 (2.6.39-200), Oracle's recommended kernel version for Oracle Linux. For further details, please see the Oracle Linux 6.3 Release Notes. Remember, Oracle Linux can be downloaded, used and distributed free of charge, updates and errata are freely available. For support, you are free to decide for which of your systems you want to obtain a support subscription, and at which level each of  them should be supported. This makes Oracle Linux an ideal choice for both your development and production systems - you decide which support coverage is the best for each of your systems individually, while keeping all of them up-to-date and secure. Wim Coekaerts recently wrote several blog posts about the benefits of Oracle Linux, which are worth a read: Oracle Linux components More Oracle Linux options My own personal use of Oracle Linux

    Read the article

  • General visual effects to meshes/entities

    - by Pacha
    I am trying to add some visual effects to some entities, meshes, or whatever you want to call them, as they are looking pretty dull in my game right now. What I want to achieve is this: http://youtu.be/zox8935PLw0?t=36s (the "texture" gets disintegrated and then goes back to normal, covering the whole mesh). I would also like to know the best way to add effects like the one in the video to my game (for example, thunder effects, shattering, etc.). I know that I can do some things with shaders, but I haven't learned them very well yet and I am still at a beginner level. I am using Ogre3D, and GLSL for shaders. Thanks! Note: I have a screenshot of my game; I want to apply the effect in the video to my main character.

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table. For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space. In the case of my input file, the dates were 2 bytes long and represented the number of days that have passed since 01/01/1950. My first thought was, in the words of Scooby, Hmmmmph? But, I love a good challenge, so I dove in.

    Reading in the flat file was rather simple. The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager. In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37). Once the code page is set correctly, SSIS can understand what it is reading and it will convert the output to the default code page, 1252. However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window. Column 1 is actually the packed date; columns 0 and 2 are the values in the rest of the file. We are only interested in Column 1, which is a 2 byte field representing a packed date. We know that 2 packed digits can be stored in 1 byte of character data, so we are working with 4 packed digits in 2 character bytes. If you are confused, stay tuned…this will make sense in a minute.

    Right-click on your Flat File Source shape and select “Show Advanced Editor”. Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of writing the character values. Now we are getting somewhere!

    Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2 position byte stream to a 4 position Unicode string containing the packed data. You need the string to be 4 bytes long because it will contain the 4 packed digits. Here is what that should look like in the Data Conversion shape. Direct the output of your data flow to a test table or file to see the results. In my case, I created a test table.

    Hold on a second! That doesn't look like a date at all. No, of course not. It is a hex number which represents the days which have passed between 01/01/1950 and the date. We have to convert the hex value to a decimal value, and use the DATEADD function to get a date value.
    Luckily, I have created a function to convert Hex to Decimal:

        -- =============================================
        -- Author:       Jim Giercyk
        -- Create date:  March, 2012
        -- Description:  Converts a Hex string to a decimal value
        -- =============================================
        CREATE FUNCTION [dbo].[ftn_HexToDec]
        (
            @hexValue NVARCHAR(6)
        )
        RETURNS DECIMAL
        AS
        BEGIN
            -- Declare the return variable here
            DECLARE @decValue DECIMAL

            IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue,3,4)

            DECLARE @decTab TABLE
            (
                decPos1 VARCHAR(2),
                decPos2 VARCHAR(2),
                decPos3 VARCHAR(2),
                decPos4 VARCHAR(2)
            )

            DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue,1,1)
            DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue,2,1)
            DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue,3,1)
            DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue,4,1)

            INSERT @decTab VALUES
            (CASE WHEN @pos1 = 'A' THEN '10'
                  WHEN @pos1 = 'B' THEN '11'
                  WHEN @pos1 = 'C' THEN '12'
                  WHEN @pos1 = 'D' THEN '13'
                  WHEN @pos1 = 'E' THEN '14'
                  WHEN @pos1 = 'F' THEN '15'
                  ELSE @pos1
             END,
             CASE WHEN @pos2 = 'A' THEN '10'
                  WHEN @pos2 = 'B' THEN '11'
                  WHEN @pos2 = 'C' THEN '12'
                  WHEN @pos2 = 'D' THEN '13'
                  WHEN @pos2 = 'E' THEN '14'
                  WHEN @pos2 = 'F' THEN '15'
                  ELSE @pos2
             END,
             CASE WHEN @pos3 = 'A' THEN '10'
                  WHEN @pos3 = 'B' THEN '11'
                  WHEN @pos3 = 'C' THEN '12'
                  WHEN @pos3 = 'D' THEN '13'
                  WHEN @pos3 = 'E' THEN '14'
                  WHEN @pos3 = 'F' THEN '15'
                  ELSE @pos3
             END,
             CASE WHEN @pos4 = 'A' THEN '10'
                  WHEN @pos4 = 'B' THEN '11'
                  WHEN @pos4 = 'C' THEN '12'
                  WHEN @pos4 = 'D' THEN '13'
                  WHEN @pos4 = 'E' THEN '14'
                  WHEN @pos4 = 'F' THEN '15'
                  ELSE @pos4
             END)

            SET @decValue = (CONVERT(INT,(SELECT decPos4 FROM @decTab)))         +
                            (CONVERT(INT,(SELECT decPos3 FROM @decTab))*16)      +
                            (CONVERT(INT,(SELECT decPos2 FROM @decTab))*(16*16)) +
                            (CONVERT(INT,(SELECT decPos1 FROM @decTab))*(16*16*16))

            RETURN @decValue
        END
        GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950 and FINALLY arrived at my “unpacked relative date”. Here is the query I used to retrieve the formatted date, and the result set which was returned:

        SELECT [packedDate] AS 'Hex Value',
               dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
               CONVERT(DATE,DATEADD(day,dbo.ftn_HexToDec([packedDate]),'01/01/1950'),101) AS 'Relative String Date'
          FROM [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS. The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.

    Read the article
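
    For anyone who wants to sanity-check the arithmetic outside SSIS, the same conversion (hex string to day offset to calendar date) is only a few lines in most languages. A rough Java equivalent, with a made-up sample value:

        import java.time.LocalDate;

        public class PackedDateCheck {
            public static void main(String[] args) {
                String hex = "0x5B45";                                    // hypothetical packed value from the file
                int days = Integer.parseInt(hex.substring(2), 16);        // base-16 to decimal, like ftn_HexToDec
                LocalDate date = LocalDate.of(1950, 1, 1).plusDays(days); // like DATEADD(day, ..., '01/01/1950')
                System.out.println(hex + " -> " + days + " days -> " + date);
            }
        }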

  • How can I unit test a class which requires a web service call?

    - by Chris Cooper
    I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form:

        method() {
            ...use Jersey client to create WebResource...
            ...make request...
            ...do something with response...
        }

    e.g. there is a create directory method, a create folder method etc. Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try and mock the web service client/responses but that breaks the guideline I've seen a lot recently: "Don't mock objects you don't own". I could set up a dummy web service implementation - would that still constitute a "unit test" or would it then be an integration test? Is it just not possible to unit test at this low a level - how would a TDD practitioner go about this?

    Read the article
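
    One common compromise that stays consistent with "don't mock objects you don't own": hide the Jersey calls behind a thin interface that you do own, unit test the surrounding logic against a hand-rolled fake, and cover the real HTTP adapter separately with an integration test against a test cluster or stub server. A sketch with hypothetical names (this is not an actual Hadoop client API):

        /** The thin seam we own; the real implementation wraps the Jersey client. */
        interface HadoopGateway {
            boolean createDirectory(String path);
        }

        /** Logic under test depends on the seam, not on Jersey or HTTP. */
        class DirectorySetup {
            private final HadoopGateway gateway;

            DirectorySetup(HadoopGateway gateway) {
                this.gateway = gateway;
            }

            String ensureDirectory(String path) {
                return gateway.createDirectory(path) ? "created " + path : "failed " + path;
            }
        }

        /** Hand-rolled fake used only in unit tests. */
        class RecordingFakeGateway implements HadoopGateway {
            String lastPath;

            @Override
            public boolean createDirectory(String path) {
                lastPath = path;   // record the call so the test can assert on it
                return true;       // pretend the web service succeeded
            }
        }

        // In a JUnit test (not shown):
        //   new DirectorySetup(new RecordingFakeGateway()).ensureDirectory("/data")

    The real Jersey-backed implementation then becomes thin enough that a handful of integration tests cover it, which is the usual answer to the "unit vs. integration" part of the question.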

  • English to French translation of computing terminology

    - by Rich
    I work in France as a Java programmer, mainly in French, but am a native English speaker. My level of French is pretty good (French wife!), but one thing I have problems with is working out whether to use English terminology or a French equivalent. Examples: lock (as in a synchronisation lock) - do I use the verb "locker" or do I use verrouiller? shard (databases) - "un shard" or "un tesson" (which means a shard of glass) ...and so-on... So, what do people recommend? Can anyone point out some good websites for translating this kind of terminology? The usual online translation tools are a bit too everyday English/French, not the slightly more specialised version that I find myself needing.

    Read the article

  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person / ego-perspective game I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside of that box but see only/mostly the inside of the box on screen, or vice versa, look from within the box to the outside). The usual way of performing such a check would involve frustum culling, but that approach would be hard to achieve with my given set of engine parameters, so I'd like to avoid it if there is a simpler way. What I actually have at the point where I would like to do the check (high-level script on CPU, not GPU side): camera world position, camera direction, camera FOV, and two box corner world coordinates (left-bottom-front, right-top-back). What I do not have right away: a view frustum definition (near/far plane, or say 6 planes defining the frustum) or any specific pixel information (uv, view-space position, depth or the like). What I would like to calculate: the percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?

    Read the article
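
    One rough approach that uses exactly the inputs listed: build a camera basis from the direction vector, project the eight box corners through a simple pinhole model (position, direction, FOV, aspect ratio), take the screen-space bounding rectangle of the projected corners, clip it to the viewport and divide by the viewport area. It over-estimates coverage for rotated boxes, needs care when corners fall behind the camera, and assumes the camera is not looking straight up or down; but it is cheap and needs no frustum planes. A sketch under those assumptions, in Java for illustration (the engine's own vector types would be used in practice):

        public class BoxScreenCoverage {

            /** Fraction of the screen [0..1] covered by the axis-aligned bounding
             *  rectangle of the box's projected corners. Vectors are double[3]. */
            static double coverage(double[] camPos, double[] camDir, double fovYRadians,
                                   double aspect, double[] boxMin, double[] boxMax) {
                double[] f = normalize(camDir);
                double[] r = normalize(cross(f, new double[]{0, 1, 0})); // camera right
                double[] u = cross(r, f);                                // camera up
                double tanY = Math.tan(fovYRadians / 2), tanX = tanY * aspect;

                double minX = Double.POSITIVE_INFINITY, maxX = Double.NEGATIVE_INFINITY;
                double minY = Double.POSITIVE_INFINITY, maxY = Double.NEGATIVE_INFINITY;
                boolean anyInFront = false;
                for (int i = 0; i < 8; i++) {
                    double[] corner = {
                        (i & 1) == 0 ? boxMin[0] : boxMax[0],
                        (i & 2) == 0 ? boxMin[1] : boxMax[1],
                        (i & 4) == 0 ? boxMin[2] : boxMax[2]};
                    double[] d = sub(corner, camPos);
                    double z = dot(d, f);
                    if (z <= 1e-6) continue;            // behind the camera: crude skip
                    anyInFront = true;
                    double x = dot(d, r) / (z * tanX);  // normalized device coords [-1..1]
                    double y = dot(d, u) / (z * tanY);
                    minX = Math.min(minX, x); maxX = Math.max(maxX, x);
                    minY = Math.min(minY, y); maxY = Math.max(maxY, y);
                }
                if (!anyInFront) return 0;
                // Clip the bounding rectangle to the viewport and take its area / 4
                // (the NDC viewport is the 2-by-2 square from -1 to +1 on both axes).
                double w = Math.max(0, Math.min(maxX, 1) - Math.max(minX, -1));
                double h = Math.max(0, Math.min(maxY, 1) - Math.max(minY, -1));
                return (w * h) / 4.0;
            }

            static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
            static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
            static double[] cross(double[] a, double[] b) {
                return new double[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
            }
            static double[] normalize(double[] a) {
                double len = Math.sqrt(dot(a, a));
                return new double[]{a[0]/len, a[1]/len, a[2]/len};
            }
        }

    If a tighter estimate is needed, the projected corners can be treated as a convex polygon (clipped against the viewport) instead of a bounding rectangle, at the cost of a little more geometry code.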

  • Ubuntu 12.10 wake-on-LAN not working with Realtek 8139

    - by f.cipriani
    My PC doesn't wake up when receiving a magic packet from a PC connected to the same router. Output of ethtool:

        fcipriani@ubuntu:~$ sudo ethtool eth0
        Settings for eth0:
            Supported ports: [ TP MII ]
            Supported link modes:   10baseT/Half 10baseT/Full
                                    100baseT/Half 100baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes:  10baseT/Half 10baseT/Full
                                    100baseT/Half 100baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                                 100baseT/Half 100baseT/Full
            Link partner advertised pause frame use: No
            Link partner advertised auto-negotiation: Yes
            Speed: 100Mb/s
            Duplex: Full
            Port: MII
            PHYAD: 32
            Transceiver: internal
            Auto-negotiation: on
            Supports Wake-on: pumbg
            Wake-on: g
            Current message level: 0x00000007 (7)
                                   drv probe link
            Link detected: yes

    I have enabled all the wake-up features in my BIOS, and I have verified the magic packet gets to the PC. I suspect the main problem is that the NIC light is completely turned off after the shutdown, but even after spending a lot of time researching I can't understand if this is a limit of my network card, my motherboard, or something in the OS which needs to be configured correctly in order to leave the NIC in standby mode with the light flashing. The NIC is a Realtek 8139, the motherboard an Asus P5L13L-X.

    Read the article

  • Can a programmer get too smart for their own good?

    - by P.Brian.Mackey
    The more I learn about programming, the more things I see that could be improved by a great deal. Often, a company's process management is total SWAG, or they have frames-based websites written recently, .NET 1.1 based code, no separation of concerns, poor quality control... I could go on and on and on... Projects can succeed, but there tends to be so much waste that I am amazed at how much time and money a company can throw away. I've seen it happen at several companies. So is it that ignorance truly is bliss? UPDATE: Question: "How is it that top developers (I don't mean like Jon Skeet level, I mean guys who are dedicated enough to hit a forum and try for self-improvement) even want to code anymore after they see the often insurmountable sociological and technical problems they are told to fix, but then are scolded for doing so?"

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature which cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provision the service using settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service.

    Key benefits of SharePoint Server Access Services:

    Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage™ view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location.

    Build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. You can find pre-built templates which you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. You can build your databases with new modular components: new Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified: Navigation Forms make your frequently used forms and reports more accessible without writing any code or logic.

    Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services helps you easily spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database.

    Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.

    Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data.

    Key features of Access Services 2010: - Access database content through a Web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases. 
Users without an Access client can open Web forms and reports via a browser and changes are automatically synchronized. - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks. - Codeless navigation: Use professional looking web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic. - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases. - Simplified formatting: By using Office themes you can create coordinating professional forms and reports across your database. Simply select a familiar and great looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • Empirical Evidence of Popularity of Git and Mercurial

    - by ana
    It's 2012! Mercurial and Git are both still strong. I understand the trade-offs of both. I also understand everyone has some sort of preference for one or the other. That's fine. I'm looking for some information on level of usage of both. For example, on stackoverflow.com, searching for Git gets you 12000 hits, Mercurial gets you 3000. Google Trends says it's 1.9:1.0 for Git. What other empirical information is available to estimate the relative usage of both tools?

    Read the article

  • Need Help Changing Owner of External Hard Drive

    - by Thomas Ballew
    My understanding of code is about zero. I can open a terminal window and type commands that are given to me, but that's about it. If someone can help me with this question, and explain at a level I'm likely to understand, thanks. If not, thanks anyway. I have an external hard drive with two partitions. I bought this drive when my operating system was Apple, 10.5 or so, and it was formatted as HFS+ with that system. Now, connecting the HD to my Linux system, I can read files, but I have about 1.5 TB of space that I can't use, because I am not the owner of the file, so I can't write to the HD. Short of reformatting the HD, is there a way for me to set the permissions for the HD so I can write to it? Again, thank you.

    Read the article

  • The Grand Unified Framework Theory

    Tom Janssens left a comment: "What still bugs me is that we do not have a unified pattern for all .net dev (using modelbinders and icommand for example...)" But, Tom, we are pretty close. At least as close as we should be, I think. With .NET there are plenty of low-level patterns we can reuse regardless of the application platform or architecture. Stuff like: asynchronous programming with events or the TPL. Object queries with LINQ. Resource management with IDisposable. At a higher...

    Read the article

  • Which version of Java should I use for learning?

    - by Dan
    I am a QA engineer interested in mobile development and automation. I have basic programming experience (intermediate-level Python and C++) and, as most companies choose Java for writing frameworks, I need to pick up Java. I use Ubuntu 12.04 LTS and I will be using Head First Java as learning material. When I searched for JDK options I found Oracle Java 6 and 7 and OpenJDK. I read somewhere in the Ubuntu forums that Java 6 is not recommended on Ubuntu systems, and I am a little bit confused about which version I should use that would be compatible with the book and the OS.

    Read the article

  • Acer Aspire 3690 wireless not working, new Ubuntu user

    - by drew
    I don't know that much about Ubuntu, but I put it on an Acer Aspire 3690 laptop (replaced Windows Vista). Everything seems to work fine but the wireless connection. It says "FIRMWARE MISSING". It detects that I have a wireless card, so I don't know what to do. I've already used the "Additional Drivers" tool, and that didn't work. I've googled and have not found anything helpful. I don't really know what I'm doing, so can your directions please be kindergarten level? Thank you very much for any help you can provide. I really appreciate it.

    Read the article

  • What programming language would you learn to reverse-engineer USB devices? [closed]

    - by user70113
    I currently work in IT support and am retraining in electrical engineering / electronics. I am also interested in reverse engineering; which language would be best for hardware RE? I have seen a few sources say C, C++ and Python. I am not familiar with Linux, but I installed Ubuntu to learn with. I am not a programmer. Far from it. But I can understand enough basic VB, Java and PHP to edit it for simple things. One of my immediate projects would be to learn to reverse engineer USB devices and write my own low-level drivers. I know there are porting kits, but I really want to know it from the ground up. Thanks for any advice folks, most appreciated.

    Read the article

  • How to learn programming for a medium-scale project as a beginner? [closed]

    - by Lin Xiangyu
    I am teaching myself programming and have learned several programming languages, but I have never written a project of more than 1000 lines. I know the best way to improve programming skills is practice. The problem is that many books just talk about the programming language, or talk about building a project from a high level; few books teach how to build a medium-scale project. For example, I want to build a simple HTTP server (not like Apache, just a simple listener on a port), a Markdown parser, or a download tool like eMule or wget. I don't know what to do. I may find pieces of code on the web, or find similar projects on GitHub, but I don't know how to read the code. I would like a tutorial that shows how to build such a project step by step and teaches me how to write thousands of lines of code. Any suggestions?

    Read the article
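
    On the "simple HTTP server" idea in particular: it makes a good first medium-sized project precisely because a barely-working version fits on one screen, and every improvement (threads, headers, static files, logging) is a natural next step. A toy Java sketch, just enough to answer a browser once per connection:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        /** Toy HTTP server: one request at a time, always answers 200 with a fixed body. */
        public class TinyHttpServer {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(8080)) {
                    while (true) {
                        try (Socket client = server.accept()) {
                            BufferedReader in = new BufferedReader(new InputStreamReader(
                                    client.getInputStream(), StandardCharsets.US_ASCII));
                            String requestLine = in.readLine();   // e.g. "GET /index.html HTTP/1.1"
                            System.out.println("Request: " + requestLine);

                            String body = "<h1>Hello from TinyHttpServer</h1>";
                            String response = "HTTP/1.1 200 OK\r\n"
                                    + "Content-Type: text/html\r\n"
                                    + "Content-Length: " + body.getBytes(StandardCharsets.UTF_8).length + "\r\n"
                                    + "Connection: close\r\n\r\n"
                                    + body;
                            client.getOutputStream().write(response.getBytes(StandardCharsets.UTF_8));
                        }
                    }
                }
            }
        }

    Point a browser at http://localhost:8080 while it runs; parsing the request path, serving files from disk and handling more than one connection at a time are the obvious next milestones, which is exactly the kind of incremental growth the question is asking how to practice.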

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leading position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides to simplify the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "….performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

    - Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.

    - Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction or knowledge of other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.

    - Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that the transactions are applied in the same sequence as they were captured, all while delivering improved performance.

    - Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.

    - Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next release of GoldenGate, support will be expanded for MS SQL Server, DB2, and Teradata.

    - Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.

    - Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers a low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

    Stay tuned for more blogs on the new features and the upcoming launch webcast where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • OWB 11.2.0.2: Managing Use of Optional OWB Features

    - by antonio romero
    Most OWB users know that parts of Warehouse Builder are covered by the database license and others require additional options (such as the Oracle Data Integrator Enterprise Edition license). Warehouse Builder 11.2.0.2 adds the ability to disable optional feature groups. This lets you avoid the inadvertent use of most licensed features at the repository level. This capability is accessed through the 11.2.0.2 Repository Assistant. We'll look at the basics here. There's also a new whitepaper that details which features are in the different feature groups associated with licenses. Read on to find out more. In Repository Assistant in 11.2.0.2, in Step 2 (“Choose Operation”), you will see a new task, “Manage Optional Features.” This is where you choose which features to enable or disable.

    Read the article
