Search Results

Search found 9233 results on 370 pages for 'high school'.


  • USB flash module giving errors

    - by vshenoy
    Hi, I have a SATA USB flash module which was previously running a 2.4 Linux kernel (2.4.36.6) and on which I am now trying to install Ubuntu Server 10.04.1 LTS. I have two such USB flash modules. On one of them, the installation process itself gives these errors:

        sd 4:0:0:0 [sda] Device not ready
        sd 4:0:0:0 [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        sd 4:0:0:0 [sda] Sense Key : Not Ready [current]
        sd 4:0:0:0 [sda] Add. Sense: Medium not present
        sd 4:0:0:0 [sda] CDB: Write(10): 2a 00 00 05 48 02 00 00 04 00
        end_request: I/O error, dev sda, sector 46114
        usb 1-1: reset high speed USB device using ehci_hcd and address 2
        Buffer I/O error on device sda1, logical block 172033
        lost page write due to I/O error on sda1
        Buffer I/O error on device sda1, logical block 172034
        lost page write due to I/O error on sda1

    On the other, the installation is successful, but after a day or two of running, the machine hangs because the kernel spews these messages:

        Remounting filesystem read-only
        EXT2-fs error (device sda1): read_block_bitmap: Cannot read block bitmap - block_group = 105, block_bitmap = 860161
        EXT2-fs error (device sda1): ext2_get_inode: unable to read inode block - inode=13083, block=24683
        ext2_free_inode: bit already cleared for inode 83966

    and the machine needs to be hard rebooted. On both systems, SCSI emulation with the usb_storage driver is used to detect the module. Here is the output of /proc/scsi/scsi on 2.4:

        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: TS       Model: UFM       Rev: 1100
          Type:   Direct-Access             ANSI SCSI revision: 02

    and on 2.6:

        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi6 Channel: 00 Id: 00 Lun: 00
          Vendor: TS       Model: UFM       Rev: 1100
          Type:   Direct-Access             ANSI SCSI revision: 00

    That is, only 'ANSI SCSI revision:' differs, although I am not sure whether this can cause any problem. I'd really appreciate it if someone could point out how to debug this issue, or suggest a mailing list where I can ask further questions about it.
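    A reasonable first step is to test the raw device outside the installer. A minimal sketch, assuming the module still enumerates as /dev/sda as in the logs above (the write test erases the stick):

        dmesg | grep -iE 'sda|i/o error'   # confirm the device node and watch for resets
        sudo badblocks -sv /dev/sda        # non-destructive read-only surface scan
        sudo badblocks -svw /dev/sda       # destructive read/write test (wipes all data)

    If either module reports bad blocks here, the flash itself is failing and no installer or kernel option will help.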

  • To Virtual or Not to Virtual

    - by Kevin Shyr
    I recently made the comment "I hate everything virtual" while responding to a SQL Server performance question.  I then promptly fired up my Hyper-V development environment to do my proof-of-concept work, and realized that I had committed the cardinal sin of making a generalized comment about something, instead of saying "it depends". The bottom line is that if the virtual environment gives the throughput the server needs, then it is not that big of a deal.  I have just seen so many environments set up with SQL Server sitting in a virtual environment sitting on a SAN, so on top of having to plan for lost data, I now have to plan for my virtual environment failing for many different reasons, though SQL 2012 High Availability Groups should make that easier.  To me, a virtual environment makes sense for a stateless application with big scalability requirements, but doesn't give much benefit to an application where performance and data integrity are both important.  If security is not a concern, I would just build servers with multiple instances on them to balance the workload. Maybe this is also too generalized a comment, and I'll confess that I'm not a DBA by trade.  I'd love to hear the pros and cons of virtualizing a SQL Server, or other examples where virtualization makes total sense (not just money, but recovery, rollback, etc.)

  • The Talent Behind Customer Experience

    - by Christina McKeon
    Earlier, I wrote about Powerful Data Lessons from the Presidential Election. A key component of the Obama team’s data analysis deserves its own discussion—the people. Recruiters are probably scrambling to find out who those Obama data crunchers are and lure them into corporations. For the Obama team, these data scientists became a secret ingredient that the competition didn’t have. This team of analysts knew how to hear the signal and ignore the noise, how to segment and target its base, and how to model scenarios and revise plans based on what the data told them. The talent was the difference. As you work to transform your organization to be more customer-centric, don’t forget that talent is a critical element. Journey mapping is a good start to understanding how your talent impacts your customer experiences. Part of journey mapping includes documenting the “on-stage” and “back-stage” systems and touchpoints. When mapping this part of your customers’ journey, include the roles and talent behind the employee actions—both customer-facing and further upstream from that customer touchpoint. Know what each of these roles does, how well you are retaining people in these areas, and your plans to fill these open positions in the future. To use data scientists as an example, this job will be in high demand over the next 10 years. The workforce is shrinking, and higher education institutions may not be able to turn out trained data scientists as fast as you need them. You don’t want to be caught with a skills deficit, so consider how you can best plan for the future talent you will need. Have your existing employees make their career aspirations known to you now. You may find you already have employees willing to take on roles that drive better customer experiences. Then develop customer experience talent from within your organization through targeted learning programs. If you know that you will need to go outside the organization, build those candidate relationships now. Nurture the candidates you want to hire and partner with universities, colleges, and trade associations so you can increase the number of qualified candidates in your talent pool.

  • Why is math taught "backwards"? [closed]

    - by Yorirou
    A friend of mine showed me a pretty practical Java example. It was a riddle. I got excited and quickly solved the problem. Afterwards, he showed me the mathematical explanation of my solution (he proved why it was good), and it was completely clear to me. This seems like the natural approach to me: solve problems, then generalize. It is very familiar; I do it all the time when I am programming: I write a function. When I have to write a similar function, I generalize the problem, grab the generic parts, refactor them into a function, and solve the original problems as specializations of the general function. At the university (or at least where I study), things work backwards. The professors show just the highest possible level of the solution ("cryptic" mathematical formulas). My problem is that this is too abstract for me. There is no connection to my previous knowledge (== reality, in my sense), so even if I can understand it, I can't really learn it properly. Others learn these formulas word by word and get good grades, since they can write exactly the same thing on the test, but this is not an option for me. I am a curious person; I can learn interesting things, but I can't learn bare text. My brain is for storing thoughts, not strings. There are proofs for the theories, but they are also really hard to understand for the same reason, and in most cases they are omitted. What is the reason for this? I don't understand why it is a good idea to present the highest level of abstraction and leave out the practical connections (or some important ideas / practical motivations).

  • The Start of a Blog

    - by dbradley
    So, here's my new blog up and running. Who am I, and what am I planning to write here? First off, here's a little about me: I'm a recent graduate (coming up to a year ago since I finished), having studied Software Engineering on a four-year course where the third year was an industrial placement. During the industrial placement I went to work for a company called Adfero in a "Technical Consultant" role as well as a junior "Information Systems Developer" role. Once I completed my placement I went back to finish my final year, but also continued in my developer role two or three days a week with the company. Working part time while at uni always seems like a great idea until you get halfway through the year. For me the problem was not so much a lack of time, but rather a lack of interest in the course content, having had the chance to work on real projects in a live environment. Most people who have been graduated a little while find this too: looking back at uni work, it seems much more trivial from a problem-solving point of view, and I found the key to uni work is actually your ability to prove, through how you talk about something, that you comprehensively understand the basics. After completing uni I returned to Adfero full time, purely in the developer role, which is where I've now been for almost a year; I have also taken on the title of "Information Systems Architect" and am working on some of the more high-level design problems within the products. What I want to share on this blog is some of the interesting things I've learnt over the last year, the things they don't teach you at uni, and pretty much anything else I find interesting! My personal favorite areas are text indexing, search, and particularly good software engineering design; good design combined with good code makes the first step towards a well-written, maintainable piece of software. Hopefully I'll also be able to share a few of the products I've worked on, the mistakes I've made, and the software problems I've inherited from previous developers and had to heavily refactor.

  • What's the best way to manage reusable classes/libraries separately?

    - by Tom
    When coding, I naturally often come up with classes or sets of classes with high reusability. I'm looking for an easy, straightforward way to work on them separately. I'd like to be able to easily integrate them into any project, and it should be possible to switch to a different version with as few commands as possible. Am I right in assuming that git (or another VCS) is best suited for this? I thought of setting up local repositories for each class/project/library/plugin and then just cloning/pulling them. It would be great if I could reference those projects by name, not by the full path, like git clone someproject. edit: To clarify, I know what VCSs are about and I do use them. I'm just looking for a comfortable way to store and edit some reusable pieces of code (including unit tests) separately, and to be able to include them (without the unit tests) in other projects without having to manually copy files. Apache Maven is a good example, but I'm looking for a language-independent solution, ideally command-line-based.
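    For the "clone by name" part, git's URL rewriting covers this. A minimal sketch, assuming the libraries live as repositories under ~/libs (the lib: prefix is an invented shorthand):

        # one-time setup: expand "lib:" to the directory holding the repositories
        git config --global url."$HOME/libs/".insteadOf lib:

        # then, from any project:
        git clone lib:someproject                               # clone by name
        git submodule add lib:someproject vendor/someproject    # or pin it inside a project
        git -C vendor/someproject checkout v1.2                 # switch versions in one command

    A submodule records the exact commit in the parent project, which covers the "switch versions with few commands" requirement; the catch is that every machine needs the same insteadOf mapping configured.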

  • What actions to take when people leave the team?

    - by finrod
    Recently one of our key engineers resigned. This engineer co-authored a major component of our application. We are not at truck number yet, but we're getting close :) Before the guy waltzes off, we want to take the actions necessary to recover from this loss as smoothly as possible and eventually 'grow' the rest of the team to competently cover the parts he authored. More about the context: the domain the component covers and the code are no rocket science, but there is still a lot of non-trivial stuff. Some team members can already cover a lot of this, but they have a lot on their plates, and we want to make sure everything is covered. What we plan so far (as I see it):
      - Improve tests and test coverage, especially for the non-trivial stuff
      - Update high-level documents
      - Document any 'funny stuff' the code does (we had to do some heavy duct-taping)
      - Add / update code documentation: have everything with 'public' visibility documented
    Finally, the questions: What do you think are the actions to take in this situation? What have you done in such situations? What did or did not work well for you?

  • Partner Webcast – Oracle Weblogic 12c for New Projects - 07 Nov 2013

    - by Thanos Terentes Printzios
    Fast-growing organizations need to stay agile in the face of changing customer, business, or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable, and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, where we will show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features that lower the cost of operations, improve performance, and enhance scalability.
    Agenda:
      - Oracle WebLogic Server Introduction
      - Application Development on WebLogic Using Java EE
      - Overview of the Application Deployment Process
      - Monitoring Application Performance
      - Q&A
    November 07th, 2013 - 9am UTC / 11am EET
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. REGISTER NOW. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

  • Employers and intellectual property 2

    - by Rick
    I have a question about intellectual property. I am currently a manager in a small manufacturing firm. The owners are driven by greed, don't appreciate the development process of complex machinery, and are happy to send things out half done. I, on the other hand, think it should be done properly, as a breakdown in the field can be costly and embarrassing. They have all of us running around doing most of the work out of hours, with an attitude of "be grateful to have a job", yet no one has a contract, any security, or any agreement in place. For a couple of the projects I am using PLCs, writing the code in my own time and doing the testing during company time, and I am aware that they could not support their own machines if I left. But as I created the code in my own time, who owns it? They have asked me to put in code that shuts the machine down pending a maintenance request after a given length of time. Could this be classed as criminal damage, or anything illegal apart from immoral? (We sell the machines with a 12-month warranty; the shutdown kicks in after that.) As time goes on, I'm getting rather fed up with the company's attitude toward the client. I am considering keeping the clients as my own and getting them to contact me directly via the shutdown code, by making it something like "this is a trial version, contact me for a full license". I wouldn't feel bad for my current employer, as he is not afraid to s***t on people: he has been involved in numerous lawsuits and has over 30 failed companies, leaving people and customers high and dry. We have taken the company this far on the reputation of the workers, and I can see things heading the way of all the other companies he has owned, taking our reputations with him. So, now that I have set the scene: if I code it to contact me directly in the shutdown, could there be any legal impact on me, as I (rightly or wrongly) think I own the code and designs? Cheers, R

  • Clouds Around the World

    - by user12608550
    At the NIST Cloud Computing Workshop this week, representatives from Canada, China, and Japan presented on their cloud computing efforts. Some interesting points made:
      - Canada: building a "Service Canada" cloud for all citizen services, but raised the issue of data location; cloud data must stay within Canada's borders, so they will not focus on public clouds where they don't know or can't control data location.
      - Japan: in response to the massive destruction of the Great East Japan Earthquake, Japan is building nationwide cloud services to support disaster relief, data recovery, and support for rebuilding new communities.
    US Ambassador Philip Verveer discussed the need for international cooperation and standards development to enable interoperability of cloud services, keeping in mind cultural and political differences. Additionally, an industry panel reported on cloud standards development, including some actual interoperability testing at http://www.cloudplugfest.org. Much of the first two days of the workshop covered progress and action plans around the 10 High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Thursday's sessions will cover the work of the various NIST Cloud Computing Working Groups on:
      - Reference Architecture and Taxonomy
      - Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC)
      - Cloud Security
      - Standards Roadmap
      - Business Use Cases
    (see Working Groups of NIST Cloud Computing)

  • Looking for a better Factory pattern (Java)

    - by Sam Goldberg
    After doing a rough sketch of a high-level object model, I am doing iterative TDD and letting the other objects emerge as a refactoring of the code (as it increases in complexity). (That whole approach may be a discussion/argument for another day.) In any case, I am at the point where I am looking to refactor code currently in if-else blocks into separate objects, because there is another value combination which creates a new set of logical sub-branches. To be more specific, this is a trading system feature, where buy orders have different behavior than sell orders. Responses to the orders have a numeric indicator field which describes some event that occurred (e.g. fill, cancel). The combination of this numeric indicator field plus whether it is a buy or a sell requires different processing by the code. Creating a family of objects to separate the code for the unique handling of each combination of the 2 fields seems like a good choice at this point. The way I would normally do this is to create some Factory object which, when called with the 2 relevant parameters (indicator, buysell), would return the correct subclass of the object. Sometimes I do this pattern with a map, which allows looking up a live instance (or a constructor to use via reflection), and sometimes I just hard-code the cases in the Factory class. So, for some reason this feels like poor design (e.g. one object which knows all the subclasses of an interface or parent object), and a bit clumsy. Is there a better pattern for solving this kind of problem? And if this factory method approach makes sense, can anyone suggest a nicer design?

  • Dedicated Servers: Is one better than two for a LAMP pseudo-HA setup? [closed]

    - by bikedorkseattle
    Possible Duplicate: How to find web hosting that meets my requirements? I know there are zillions of comments about hosting out there, but I haven't read much about this. Our current, well-known host is having too many problems, the hardware we are on is subpar, and I'm ready to leave. A day of downtime can cost as much as our monthly hosting bill. A month of bad performance is just killing us right now, user- and Google-wise. I'm wondering about running two dedicated boxes for LAMP: one as the primary Nginx/Apache (proxy pass) box, and the other as the MySQL box. Running a single box scares the bejesus out of me, because who knows how long it will take anyone to fix a RAID card or whatever. The idea is to set this up as a failover system using Pacemaker and Heartbeat: if one server goes down, the other can take over, running both web and db. There are some good articles over at Linode about this. I have a few DBs that are 1 GB+ and would like to load them into memory. Because of this, I'm shying away from a Linode HA setup, because for the price I could do it with two dedicated servers as described. Am I mad, or an idiot? What are people out there doing for pseudo-high-availability, good-performance setups under $400/month? I'm a webmaster; I do a lot of things, none of it that well :)
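    For scale, the Pacemaker half of such a setup is small. A rough sketch in crm shell syntax (the resource names and floating IP are invented; this is an untested outline, not a working config):

        sudo crm configure primitive vip ocf:heartbeat:IPaddr2 \
            params ip=203.0.113.10 cidr_netmask=24 op monitor interval=10s
        sudo crm configure primitive web lsb:nginx op monitor interval=15s
        sudo crm configure group lamp vip web                  # keep the IP and web server together
        sudo crm configure property no-quorum-policy=ignore    # typical for two-node clusters

    The harder part is MySQL: you still need replication (or DRBD) underneath, so the surviving node has current data when the IP fails over.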

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s). Copying 1.8 GB takes me over 10 minutes (it should be < 3 min). I have two identical SanDisk Cruzer 8 GB sticks, and I have the same problem with both. I have a Super Talent 32 GB USB SSD in the neighboring port and it works at the expected speeds. The problem, as I see it in the GUI, is that the progress bar goes to 90% almost instantly, completes to 100% a little more slowly, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:

        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access     SanDisk  Cruzer    1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290]  sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
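    The progress-bar behavior is the page cache at work: the GUI measures writes into RAM, and the hang at 100% is the kernel flushing to the slow stick; interrupting during that flush is also why the tail of the file gets corrupted. To measure the device's real speed, take the cache out of the picture. A sketch, assuming the stick is mounted at /media/usb (bigfile and testfile are placeholder names):

        sync                                          # flush anything already pending
        time sh -c 'cp bigfile /media/usb/ && sync'   # time the copy including the flush

        # or write uncached data directly (creates a 400 MB test file):
        dd if=/dev/zero of=/media/usb/testfile bs=4M count=100 oflag=direct

    If these still show ~3 MB/s, the stick itself is the bottleneck rather than anything Ubuntu is doing.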

  • Internet slow on one router only [the problem only in Ubuntu] [on hold]

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing sucks at home (slow browsing and slow loading times). I changed the DNS servers to 8.8.0.0, which still doesn't help. And funnily, the download speed is extremely high on this network (torrents, for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in the router settings, or what can I try? By the way, I use a wired connection to the router. EDIT: There are no problems when using Windows. EDIT: ifconfig:

        eth0      Link encap:Ethernet  HWaddr f2:4d:a0:c0:3f:4c
                  inet addr:192.168.11.8  Bcast:192.168.11.255  Mask:255.255.255.0
                  inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:206798 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:76680734 (76.6 MB)  TX bytes:21738160 (21.7 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:160 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:11094 (11.0 KB)  TX bytes:11094 (11.0 KB)

        ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

        ping -c 2 google.com
        PING google.com (213.159.32.147) 56(84) bytes of data.
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms
        64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms
        --- google.com ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms
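    Fast torrents with slow page loads usually point at name resolution or MTU rather than raw bandwidth. A few hedged checks to separate the two:

        time nslookup google.com                     # slow here = DNS problem
        time wget -O /dev/null http://google.com/    # slow here with fast DNS = transfer problem

        # probe for an MTU mismatch (1472 bytes + 28 bytes of headers = 1500):
        ping -M do -s 1472 -c 2 google.com           # if this fails, lower -s until it passes

    If DNS turns out to be the culprit, temporarily pointing /etc/resolv.conf at a known resolver such as 8.8.8.8 (instead of the router) shows whether the router's DNS proxy is at fault.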

  • No sound after upgrading to Ubuntu 11.10 from win7

    - by Tilman
    Just as a prefix to my question, I'd like to note that I'm just now entering the world of Linux (unless you count my Android, but that's a very different experience...). I have two computers that run Ubuntu 11.10. The first has given me very little trouble, aside from figuring out the basics. The second, from which I'm writing this question, has (up to this point) had only one problem: no sound. I've read a couple of similar questions and found little help, as the component catalog doesn't have my computer listed (in fact I'm not surprised; this is a POS I had my mom grab from her work before they officially closed the doors behind them). It had perfect sound before, and no sound now. sudo lspci -v brings up:

        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
                Subsystem: Intel Corporation Device d608
                Flags: bus master, fast devsel, latency 0, IRQ 45
                Memory at ff980000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: [50] Power Management version 2
                Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
                Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
                Capabilities: [100] Virtual Channel
                Capabilities: [130] Root Complex Link
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel

    Any help would be greatly appreciated; me and my gf just wanna watch a damn movie lol
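    Since the controller is detected and snd-hda-intel is loaded, the usual next steps are at the ALSA layer; a hedged checklist:

        aplay -l                   # does ALSA list a playback device for the card?
        alsamixer                  # unmute Master/PCM (a channel showing "MM" is muted)
        speaker-test -c 2 -t wav   # play a test sound below any desktop mixer
        sudo alsa force-reload     # Ubuntu's helper script to restart ALSA

    A common catch on boards of this era is an "Auto-Mute" switch in alsamixer, or sound routed to the wrong jack; both are worth checking before touching driver options.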

  • Where have I been for the last month?

    - by MarkPearl
    So, I have been pretty quiet for the last month or so. True, it has been holiday time and I went to Cape Town for a stunning week of sunshine and blue skies, but the second I got back home I spent the remainder of my holiday at my PC viewing tutorials on www.tekpub.com. Craig Shoemaker, whom I got in contact with because of his podcast, sent me a one-month free subscription to the site, and it has been really appreciated. I have done a lot of WPF programming in the past, but not any ASP.NET work, so I used the time to get a peek at ASP.NET MVC 2 as well as a bunch of other technologies. I just wish I had more spare time to do the rest of the videos. While I didn't understand all of the ASP.NET material (it required previous ASP.NET expertise), the site was a really good jump start for someone wanting to learn a new technology and broaden their horizons, and I would highly recommend it. My only gripe is that in South Africa we have limited bandwidth and bandwidth speeds, so I spent a lot of my monthly bandwidth on the site and had to top up with my ISP several times because of the high-quality video captures the site uses. I would have preferred to download the videos, but apparently that is only available to people who have the yearly subscription. Other than that, great site and thanks a ton Craig!

  • Hosting and domain registrations for multiple clients under a single hosting account of mine?

    - by letseatfood
    I am finally getting regular work designing, developing, and deploying websites for small businesses and individuals. So far the websites use single-user content management systems, so they create, as far as I know, minimal load on the shared servers. I have always required that each of my clients purchase annual shared hosting at Dreamhost. For domain registration, I ask that they register with Dreamhost, but some already have a domain registered elsewhere, and this is fine with me. I do this so the billing issues are the client's responsibility, not mine. My question is: since I can register unlimited domains and connect them to my one shared hosting account at Dreamhost, should I stop requiring clients to individually pay for shared hosting and a domain? Should I instead be paying for one hosting account and hosting all of my clients' websites on that account? As I said before, I currently have each client buy their own hosting because I feel that, for example, if there is high traffic to their site, there would be less of a chance of the site going down than if their site were hosted with many others on one account. I am famous for being long-winded; please let me know if I can clarify at all. Thanks!

  • Worthless Anti-Spam (What can we learn)

    - by smehaffie
    I recently came across a site that had an "anti-spam" field at the bottom of the entry form. The first issue I had with it was that at 1280x800 you could not read the value you were supposed to enter (see below). You tell me: should you enter div, dlv, piv, or plv? But even worse than not being readable at high resolutions is the fact that the programmer who coded it really did not understand what this was used for. An anti-spam (aka captcha) entry field should not be readable by looking at the HTML DOM (so entry of the value cannot be scripted). In this case the value is simply a disabled text input field that holds the value you need to type. So a hacker would simply need to search for the text input field named "spam2" and then they could flood the site with spam.

        <td>
          <label>
            <input name="spam1" type="text" class="small" id="spam1" size="6" maxlength="3" />
            <input name="spam2" type="text" class="small" id="spam2" value="plv"
                   disabled="disabled" size="6" maxlength="3" />
            * <span class="small">- Anti-SPAM key - please enter matching value</span>
          </label>
        </td>

    There are some things to learn from this example: 1) Always make sure you understand why you are coding a feature/function for any program you write. Just following the requirements without understanding the "why" will sooner or later come back to bite you; the above appears to be an example of this. 2) Always check how the screen appears at different resolutions. In this case it was pretty much unreadable at 1280x800, but you could read it at 800x600 (though most people I know do not have their resolution set that low). Lucky for me, I could "View Source" and get the value I needed to enter.

  • WebCenter at Oracle Day Toronto

    - by Lance Shaw
    The Oracle Day event took place in Toronto yesterday at the Hyatt Regency Hotel downtown.  Attendance was excellent and it was standing room only at the keynote sessions.   Anytime the venue has to bring in chairs to handle the overflow crowd, you know there is a lot of interest! This year, WebCenter was featured prominently as part of the Fusion Middleware session track.  What was interesting to see was just how many customers are interested in consolidating and simplifying their existing infrastructure.  So many companies are still struggling with information silos such as file shares, SharePoint Sites and a myriad of departmental or process-centric repositories.  Naturally, these get more and more expensive to manage over time so there is a high level of interest in reducing the size, scope and cost of this infrastructure.  When companies see how they can use Fusion Middleware and related technologies to integrate with WebCenter Content, Imaging and other solutions to centralize content delivery across business applications, they quickly realize that there are significant cost savings to be had. Oracle Day Events are happening all over the world and there is likely going to be one near you.  To check out the full list and to register, visit the Event page here.  It is a great way to not only hear about WebCenter and how it can be used to your advantage, but also a great way to learn about the broader set of related products in the Fusion Middleware portfolio that are available to extend and enhance the power of your particular business solutions. If you cannot make it, or missed the event in your area, be sure to visit our new WebCenter Content page with a variety of informative assets all in one simple location.  It's a new page designed to provide you with easy access to customer stories, videos, whitepapers, webcasts and more.  We hope you find it valuable!

  • Transitioning from Internal to Public Speaking

    - by TJB
    For whatever reason, I've always enjoyed giving presentations. As a developer, I've grown from giving the rare presentation when asked to frequently doing 'brown bag' talks and other presentations on new technology, projects, etc. I'd like to expand as a presenter and start giving talks in public, outside of just my workplace, and I'm looking for tips on how to get there. At a high level, I'd love to know a good path to take and useful tips to help me grow from giving internal talks to my group (10-20 people) to eventually presenting at medium-to-large-sized conferences. Here are some specific questions, but I will take any advice you can offer:
      1. How much experience do I need to speak at user groups, etc.? I've been in industry for around 5 years, which pales in comparison to most speakers I normally see.
      2. What is a good venue for my first public talk?
      3. What surprises can I expect when transitioning from speaking to a small group of friends to presenting in public to strangers?
    I live in southern California and my background is mostly .NET / web, so any specific user groups / venues are also greatly appreciated.

  • How can I fix my xvinfo?

    - by YumYumYum
    How can I fix my X server/driver?

        $ xvinfo
        X-Video Extension version 2.2
        screen #0
        no adaptors present

    Additional info:

        $ uname -a
        Linux desktop 2.6.32-33-generic #70-Ubuntu SMP Thu Jul 7 21:13:52 UTC 2011 x86_64 GNU/Linux

        $ lspci
        00:00.0 Host bridge: Intel Corporation Device 0100 (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04)
        00:19.0 Ethernet controller: Intel Corporation Device 1503 (rev 05)
        00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5)
        00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5)
        00:1c.3 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 4 (rev b5)
        00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation Device 1c4a (rev 05)
        00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05)
        01:00.0 PCI bridge: Integrated Technology Express, Inc. Device 8892 (rev 10)
        04:00.0 USB Controller: NEC Corporation Device 0194 (rev 04)

    Follow-up: It seems that on 64-bit this was a mess with the existing approach. After upgrading to 12.04 64-bit, this problem on the same hardware is resolved (of course, I now have other driver problems).
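    For context: "no adaptors present" from xvinfo usually means X fell back to an unaccelerated driver (vesa or fbdev), which provides no XVideo at all, and the 2.6.32 kernel shown above predates mainline Sandy Bridge graphics support. A quick check on such a system:

        grep -iE 'LoadModule|vesa|fbdev|intel' /var/log/Xorg.0.log

    If only vesa or fbdev shows up, the fix is a newer kernel and Intel driver stack, which matches the follow-up: the problem disappeared after upgrading to 12.04.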

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop. Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop into Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS. Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access to data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.

  • OpenJDK In The News: AMD and Oracle to Collaborate in the OpenJDK Community [..]

    - by $utils.escapeXML($entry.author)
    During the JavaOne™ 2012 Strategy Keynote, AMD (NYSE: AMD) announced its participation in OpenJDK™ Project “Sumatra” in collaboration with Oracle and other members of the OpenJDK community to help bring heterogeneous computing capabilities to Java™ for server and cloud environments. The OpenJDK Project “Sumatra” will explore how the Java Virtual Machine (JVM), as well as the Java language and APIs, might be enhanced to allow applications to take advantage of graphics processing unit (GPU) acceleration, either in discrete graphics cards or in high-performance graphics processor cores such as those found in AMD accelerated processing units (APUs).“Affirming our plans to contribute to the OpenJDK Project represents the next step towards bringing heterogeneous computing to millions of Java developers and can potentially lead to future developments of new hardware models, as well as server and cloud programming paradigms,” said Manju Hegde, corporate vice president, Heterogeneous Applications and Developer Solutions at AMD. “AMD has an established track record of collaboration with open-software development communities from OpenCL™ to the Heterogeneous System Architecture (HSA) Foundation, and with this initiative we will help further the development of graphics acceleration within the Java community.”“We expect our work with AMD and other OpenJDK participants in Project “Sumatra” will eventually help provide Java developers with the ability to quickly leverage GPU acceleration for better performance,” said Georges Saab, vice president, Software Development, Java Platform Group at Oracle. "We hope individuals and other organizations interested in this exciting development will follow AMD's lead by joining us in Project “Sumatra."Quotes taken from the first press release from AMD mentioning OpenJDK, titled "AMD and Oracle to Collaborate in the OpenJDK Community to Explore Heterogeneous Computing for Java ".

  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):
      - Specified that the SessionContext object in a singleton session bean is thread-safe
      - Clarified that the EJB timer distribution and failover rules apply only to persistent timers
      - Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
      - Fixed section numbering (left over after moving it to its own chapter) in Ch 17
      - Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
      - Fixes for EJB_SPEC-61 (Ambiguity in EJB Lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
      - JMS/MDB changes: added new standard activation properties and the unique identifier, and rearranged sections for easier navigation
      - Fixed unresolved cross-references
      - Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
      - Synchronized permissions in the table with the permissions listed for the EJB components in the Java EE Platform Specification Table EE.6-2
      - Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
      - Updated most of the referenced documents to their latest versions
    Happy reading!

  • Microsoft Lowers Cloud Barrier To Entry

    - by Herve Roggero
    Once in a while, the technology stack changes enough to create a disturbance in the IT industry. Microsoft did just that today and has officially closed the gap with its #1 competitor: Amazon. What is remarkable is that Microsoft is no longer an alternative to Amazon; it is becoming a clear leader in that space. Some of the new features include official support for durable Virtual Machines with high availability (cross-geographic replication), free WebSites to try Azure, MySQL database at no charge, a new distributed low-latency cache feature, Linux support, support for existing VPN hardware for seamless on-premise integration, a new partner ecosystem and much, much more. Amazon had an edge against Windows Azure in the IaaS (Infrastructure as a Service) space, until now. With the latest release from Microsoft Azure, the gap has been filled. In fact, it seems Amazon may now have a gap to fill… This is great news for everyone; it seems that cloud offerings are becoming more standardized among the more mature cloud providers, and the management stack and quality of service of each cloud provider is increasingly becoming the differentiator. With today’s announcements, it is becoming clear that cloud providers are pushing hard to increase their service footprint and lower typical barriers to entry, such as support for open-source operating systems, free trial offers, higher availability, faster deployment times, and simpler enterprise integration.
