Search Results

Search found 27233 results on 1090 pages for 'information quality'.


  • What are the specifications on the RAM inside my computer?

    - by Faken
    I have a Dell Dimension 9200 that I bought 4 years ago. I want to find out the exact specifications of the RAM (manufacturer, speed, timings, etc.). Is there a way to get this specific info without having to open up the PC? (It's buried in and under a bunch of furniture; I'd prefer not to have to dig it out.) All I know right now is that it is four 1 GB sticks of DDR2 RAM at 667 MHz, the standard RAM that shipped with the computer from Dell 4 years ago. Does anyone know the specifications of the RAM that Dell used in this particular model 4 years ago? Note: I've done my research before coming here. CPU-Z, EVEREST, and AIDA32 have all been unable to give me any more information than 4 x 1 GB @ 667 MHz. I can't find any specifications in the Dell online manuals either (at least not as specific as I want). Thanks -Faken
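
    A hedged suggestion (a sketch, not a guaranteed answer): on Windows, WMIC reads the SMBIOS tables and can sometimes report more module detail than SPD-based tools, without opening the case. From a Command Prompt:

        wmic memorychip get Manufacturer,PartNumber,Speed,Capacity,DeviceLocator

    If PartNumber comes back populated, searching for that part number usually turns up the full timing specifications; if Dell left the field blank, this will not tell you more than CPU-Z did.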

    Read the article

  • Dirt Cheap Bi-Directional Antenna Wirelessly Extends Your LAN

    - by Jason Fitzpatrick
    If you’re looking for an effective way to link remote LANs without the hassle of laying cable, this DIY bi-directional antenna is a quick (and cheap) method for bringing internet access to outbuildings and other locations. Tinkerer Danilo Larizza needed to share internet access between apartments that are relatively close together but not hardwired, ruling out simply sharing the access via existing LAN infrastructure. His solution combines a simple scrap-wire antenna array mounted inside a plastic food bin (seen here with the cover removed to show the antenna) and some coaxial cable linking the antenna to two routers. Our favorite part of his build is that he constructed the pair just to establish whether the antenna setup would even work in his location, intending to buy commercial antennas if it did; his Tupperware models worked so well, however, that they’re now the permanent solution. Hit up the link below for more information about the project. 2.4 GHz Directive Biquad Antenna [via Hack A Day]

    Read the article

  • Simplest way to render image over top of another with another image used as mask in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is to have a single large background image that is always visible (at full alpha) and then show a second image (what I call a light map or specular map) partially over the top, based on a third image (which is effectively a mask). The effect is similar to this effect, except that instead of simply darkening or lightening the background image using the third image, it needs to mask the second without affecting the first at all. The third image is the only one that moves, therefore hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear I'll provide visual examples when I have more time. I'd prefer not to go down a shader route, as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. (Update: happy to use a shader approach.) Cheers. Additional: These third images are effectively light sources being cast onto the first image, revealing the specular information from the second image to simulate light 'shining' off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two mask images will need to be combined (added?) to produce a final image which masks the second image. Don't worry about things like coloured lights; for this technique the lights are all considered white.
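
    A hedged fixed-function sketch of one way to do this without writing a shader (PyOpenGL syntax; draw_fullscreen_quad is a hypothetical helper that draws a textured quad, and the framebuffer is assumed to have an alpha channel): write the moving mask into the destination alpha, then additively blend the specular image scaled by that alpha.

        from OpenGL.GL import *  # assumes PyOpenGL and a current GL context

        def draw_masked_specular(background_tex, mask_tex, specular_tex):
            # 1. Background at full opacity, no blending.
            glDisable(GL_BLEND)
            draw_fullscreen_quad(background_tex)   # hypothetical helper

            # 2. Write the (moving) mask into the destination alpha only.
            #    Assumes the mask's value lives in its alpha channel.
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE)
            draw_fullscreen_quad(mask_tex)
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)

            # 3. Add the specular image, scaled by the mask now in dest alpha.
            #    GL_ONE as the destination factor leaves the background
            #    untouched wherever the mask is zero.
            glEnable(GL_BLEND)
            glBlendFunc(GL_DST_ALPHA, GL_ONE)
            draw_fullscreen_quad(specular_tex)
            glDisable(GL_BLEND)

    For two overlapping lights, step 2 can draw both masks with additive blending (glBlendFunc(GL_ONE, GL_ONE)) so their alphas sum before step 3, which matches the 'combine the alpha values' idea above.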

    Read the article

  • Enterprise Manager Grid Control Licensing

    - by Lajos Sárecz
    I often get questions about Oracle Enterprise Manager Grid Control licensing, so below I try to summarize the most important information. This overview is not exhaustive, since several products (Data Masking, Real Application Testing, Real User Experience Insight, Application Testing Suite) are related to Enterprise Manager but are licensed differently. The primary source of information on Enterprise Manager licensing is the Licensing Information document. The most important points: - The Grid Control framework (the Agents and the console with its base functionality, see below) is free on its own, and it even includes a restricted-use license for Oracle Database, provided that database is used solely as the Oracle Management Repository. Note that this does not include other Oracle Database options, such as RAC! Similarly, Oracle WebLogic Server may be used free of charge exclusively to serve the Oracle Management Server, but without clustering. - The base functionality of Grid Control: Discovery, Groups, Job Scheduling, Real-time availability, Performance & monitoring, Target Home Pages, Administration, Console alerts. - Depending on the managed products, the base functionality can be extended with Management Pack, Plug-in and Connector products. As a rule, their licensing must always match the licensing of the monitored, managed product. For example, if we want to use the Diagnostics Pack on two database servers, we must buy CPU or NUP (Named User Plus) licenses for both, according to how the database itself is licensed. Note that these particular Management Packs can only be used with Enterprise Edition databases. - Several paid features are reachable from the Grid Control console without any separate installation (the same is true of Database Control and Fusion Middleware Control). To avoid a license violation, it is worth verifying which Management Packs have been enabled in a given environment. The easiest way to do this is in the Management Pack Access submenu of the Setup menu in Grid Control; a more detailed description can be found here. The Database Diagnostics and Tuning Packs can also be disabled at the database level, so that they cannot be used even from the command line; I have written about this before. The USD prices of the individual management products can be found in the price list. If anything important is missing, I welcome questions and comments, and I will extend the above as needed.

    Read the article

  • Wake up and record in Windows Media Center on a Mac Mini

    - by Sir Code-A-Lot
    I'm currently considering buying a Mac Mini to use as a media center. I plan to install Windows 7 (or 8) on it using Boot Camp. Will it be able to go into standby or hibernate (S3, S4?) and wake up to record TV scheduled in Windows Media Center? I haven't been able to find concrete information on which standby types are supported when running Windows under Boot Camp, or on whether Windows will even be able to wake when a recording should start. I just want to be clear on any limitations in this area before I buy anything.
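
    A hedged way to verify this once Windows is running under Boot Camp (it can't settle the question before buying, but it removes the guesswork afterwards): powercfg reports which sleep states the installed drivers actually expose, and which wake timers are armed:

        powercfg /a             (lists the sleep states the drivers expose: Standby S3, Hibernate, ...)
        powercfg /waketimers    (lists pending wake timers, e.g. scheduled recordings)

    If /a shows Standby (S3) available and Media Center's timer appears under /waketimers, scheduled wake-to-record should work.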

    Read the article

  • Oracle Business Intelligence Applications 10g Bootcamp

    - by mseika
    12th - 15th February 2012, Reading (UK)
    The Oracle Business Intelligence Applications offer out-of-the-box integration with Siebel CRM and Oracle eBusiness Suite, and provide pre-built operational BI solutions for eBusiness Suite, PeopleSoft, Siebel, and SAP. This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, whilst also providing an understanding of their installation, configuration and extension. The course will cover the following topics:
    • Overview of Oracle Business Intelligence Applications
    • Oracle BI Applications Fundamentals and Features
    • Configuring BI Applications for Oracle E-Business Suite
    • Understanding BI Applications Architecture
    • Fundamentals of BI Applications Security
    REGISTER NOW (see the Partner Registration Guide). Price: FREE
    Venue: Cookham Room, Oracle Corporation UK Ltd, Oracle Parkway, Thames Valley Park, Reading, Berkshire RG6 1RA
    Dates: 12th - 15th February 2012, 9:30 am – 5:00 pm BST
    Audience: The seminar is aimed at BI Consultants and Implementation Consultants within Oracle's Gold and Platinum Partners.
    Prerequisites:
    • Good understanding of basic data warehousing concepts
    • Hands-on experience with Oracle Business Intelligence Enterprise Edition
    • Hands-on experience with Informatica
    • Some understanding of Oracle BI Applications (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM)
    • Good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivable, Accounts Payable
    System requirements: attendees are required to bring a laptop.
    • 4 GB RAM, recognized by 64-bit Windows
    • 80 GB free space on the hard drive or an external device
    • Core 2 Duo CPU or higher
    • Operating system: Windows 7, Windows XP or Windows 2003 (Windows Vista is not supported)
    • An administrator user
    For more information please contact [email protected].

    Read the article

  • aligning truecrypt partition on 1.5TB 4kB sector drive

    - by pQd
    Hi, aligning partitions to start at a real physical sector boundary on SSDs / striped RAIDs / 4 kB-sector drives is a 'good thing to do', but I've run into a problem when trying to do it for a TrueCrypt partition that will contain ext3, or so it seems. When the drive in question is partitioned properly and formatted with ext3, I get very reasonable write speeds of around 70-80 MB/s, but when I put TrueCrypt and ext3 on top of it, write performance becomes very unstable, bouncing between 1-25 MB/s with very high I/O wait. On the same server I don't have any performance issues with ext3 on top of TrueCrypt on regular 512 B-sector 500 GB SATA disks. So my best guess is that the I/O waits are caused by misalignment, but I cannot find reliable information on how to calculate the optimal partition beginning. I've tried starting it at logical sector 128, and I've also tried sector 8132 as suggested here, but both gave me very bad and unstable performance. Do you have any experience with a similar setup? Thanks!
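
    A hedged sketch (assuming the 1.5 TB drive is /dev/sdb, a placeholder): the usual approach is to start the partition on a 1 MiB boundary, i.e. logical sector 2048, which is divisible by the 4 KiB physical sector size, and then let parted confirm the alignment:

        sudo parted /dev/sdb mklabel msdos                    # destroys the existing table
        sudo parted -a optimal /dev/sdb mkpart primary 1MiB 100%
        sudo parted /dev/sdb align-check optimal 1

    Incidentally, sector 8132 is not a multiple of 8 (8132 x 512 B is not divisible by 4096), so that attempt was guaranteed to be misaligned; 128 is 4 KiB-aligned, which suggests something beyond partition alignment, perhaps the TrueCrypt or ext3 layering, may also be in play.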

    Read the article

  • Broke Ubuntu 12.04 either with Screencloud or Ubuntu Tweak

    - by pgrytdal
    Earlier today I installed the screenshot app Screencloud, and after the installation Firefox was having problems. I didn't think much of it, just closed Firefox, then proceeded to change the toolbar opacity in Ubuntu Tweak. After restarting my system, hoping it would fix Firefox (because, you know, restarting fixes everything ;) ), I am able to get to the login screen and type in my password, but then I get 3 errors regarding the Screencloud Ubuntu One, Imgur, and Dropbox plugins. After clicking "Okay" on all three errors, all I see is my wallpaper. I am not able to open a terminal via Ctrl+Alt+T, but I am able to log out via Ctrl+Alt+Delete. I have tried all the desktop environments I have installed (all of which, except Unity 2D, involve Cairo Dock). I hope I have provided enough information; if you need more, please ask. Please help! I would like to not have to re-install Ubuntu. Other info: OS: Ubuntu 12.04. I also have an Ubuntu 12.10 live USB, but Ubuntu 12.10 hasn't run very well on any of my computers.
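
    A hedged recovery sketch, since Ctrl+Alt+T needs the desktop session that isn't loading: a text console is usually still reachable from the login screen, and removing the suspect package from there may avoid a reinstall (both the package name and the settings path below are guesses):

        # switch to a virtual console with Ctrl+Alt+F1 and log in, then:
        sudo apt-get remove screencloud          # assumed package name
        rm -rf ~/.config/screencloud             # hypothetical per-user settings
        # back to the graphical login with Ctrl+Alt+F7

    If the desktop still only shows the wallpaper afterwards, the Ubuntu Tweak opacity change is the other suspect.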

    Read the article

  • Breaking The Promise of Web Service Interoperability

    The promise of web service interoperability is achievable if certain technical and non-technical issues are dealt with properly. As the world gets smaller and smaller thanks to our growing global economy, the need for security is increasing, and security is vital when transferring data from one server to another. As new security standards and protocols are created, the environments of web service hosts and clients must be kept in sync so that they can communicate using the same standards and protocols. For example, if a new protocol X can only be implemented on computers built after 2010, then all computers built prior to 2010 will be unable to connect to any web service host that uses only this protocol in its security policy. If the host and client of a web service cannot communicate using a set of common standards and protocols, then the web service is unavailable to those clients, breaking the promise of interoperability. Another limiting factor for web services is governmental policies and regulations. I experienced this firsthand last year when I worked on a project that dealt with personally identifiable information (PII) regarding US and Canadian citizens. The Canadian government currently requires that any data pertaining to Canadian citizens be stored in Canada only. The issue we had was the fact that we are a US-based company that sometimes works with Canadian PII as part of a service that we provide. Because we are a US-based company dealing with Canadian data, we had to place a file server inside the Canadian border in order to continue working for our Canadian customers.

    Read the article

  • In cPanel, mail goes to spam instead of the inbox in Gmail

    - by Robin Jain
    I have a cPanel VPS server on which I created a domain, but when I send a mail through webmail to a Gmail address it goes into spam. Note: the mail IP is not blacklisted, SPF records are enabled, DKIM is enabled, and reverse DNS is correct.
    Email header information:
        Delivered-To: [email protected]
        Received: by 10.143.93.13 with SMTP id v13csp119806wfl; Fri, 6 Jul 2012 08:01:36 -0700 (PDT)
        Received: by 10.182.52.42 with SMTP id q10mr26133912obo.46.1341586895571; Fri, 06 Jul 2012 08:01:35 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from lakshyacs-u.securehostdns.com ([50.97.147.134]) by mx.google.com with ESMTPS id fx3si18028369obc.144.2012.07.06.08.01.35 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 06 Jul 2012 08:01:35 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) client-ip=50.97.147.134;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) [email protected]
        Received: from localhost.localdomain ([127.0.0.1]:39016 helo=harishjoshico.com) by lakshyacs-u.securehostdns.com with esmtpa (Exim 4.77) (envelope-from <[email protected]>) id 1SnA2J-0006Nq-05 for [email protected]; Fri, 06 Jul 2012 20:31:35 +0530
        Received: from 223.189.14.213 ([223.189.14.213]) (SquirrelMail authenticated user [email protected]) by harishjoshico.com with HTTP; Fri, 6 Jul 2012 20:31:35 +0530
        Message-ID: <[email protected]>
        Date: Fri, 6 Jul 2012 20:31:35 +0530
        Subject: ggglkhl
        From: [email protected]
        To: [email protected]
        User-Agent: SquirrelMail/1.4.22
        MIME-Version: 1.0
        Content-Type: text/plain;charset=iso-8859-1
        Content-Transfer-Encoding: 8bit
        X-Priority: 3 (Normal)
        Importance: Normal
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - lakshyacs-u.securehostdns.com
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
        X-AntiAbuse: Sender Address Domain - harishjoshico.com
        (message body: jhkhl)
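
    Two hedged observations from the header above: SPF passes, but there is no DKIM-Signature header at all, so DKIM may be enabled in cPanel without actually signing outgoing webmail; and a one-word gibberish subject and body ('ggglkhl' / 'jhkhl') will trip Gmail's content filters regardless of authentication. A quick way to check what Gmail sees for the domain (the 'default' DKIM selector is an assumption, though cPanel typically uses it):

        dig +short TXT harishjoshico.com
        dig +short TXT default._domainkey.harishjoshico.com

    If the second query returns nothing, the DKIM record or signing is the thing to fix first.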

    Read the article

  • MQTT, GWT, ActiveMQ stack to bring jms to the browser

    - by scphantm
    I am in the preliminary stages of architecting a legacy replacement project. They already have sub-half-second performance on their green screens and they want the same on their web app. We have a 390 mainframe that can handle anything we throw at it, but there isn't a good JVM for it, so we have two tiers of WebSphere servers between the mainframe and the browser: the UI server and the BL server. For the UI I'm leaning towards GWT, but one thing that I think would seal the deal is to add messaging capabilities to the browser. The idea is, say you click on a link that displays a second panel of information: instead of the classic GWT approach, where it triggers a GWT-RPC call to the UI server, the UI server routes it to the BL server, and the BL server sends it to the mainframe and back out, it drops an MQTT message directly to the BL server or directly to the mainframe. Say writes go to the BL server, reads go to the mainframe. This is easy enough in classic JMS because you can issue a message that has an expected response and have your callback ready to get the response. But from what I'm reading so far, MQTT doesn't have that; it looks like it's strictly fire-and-forget, which would make it really tough to come up with a way to get a response back to the workstation that called it. Am I right here? Has anyone tried this stack before with GWT?
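
    MQTT has no built-in request/reply, but the JMS pattern can be emulated by hand: tag each request with a correlation ID and subscribe to a per-client reply topic that the responder publishes back to. A hedged sketch with the Python paho-mqtt client (1.x API; the broker host and topic names are made up):

        import uuid
        import paho.mqtt.client as mqtt  # assumes the paho-mqtt package

        REPLY_TOPIC = "replies/workstation-42"  # hypothetical per-client topic
        pending = {}                            # correlation id -> request

        def on_message(client, userdata, msg):
            # Responder echoes the correlation id ahead of the payload.
            corr_id, _, body = msg.payload.decode().partition("|")
            if pending.pop(corr_id, None) is not None:
                print("response:", body)

        client = mqtt.Client()
        client.on_message = on_message
        client.connect("broker.example.com")    # placeholder broker
        client.subscribe(REPLY_TOPIC)

        corr_id = str(uuid.uuid4())
        pending[corr_id] = "customer lookup"
        # Convention (not protocol): payload carries corr id and reply topic.
        client.publish("requests/customer-lookup", corr_id + "|" + REPLY_TOPIC + "|id=123")
        client.loop_forever()

    The catch is exactly what the question suspects: the correlation is a convention between publisher and responder, not something the broker enforces, so timeouts and orphaned replies have to be handled in application code.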

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent something like 20 hours on this nice error, and it seems dozens of people across the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot the system says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard to get past it. After this the system boots and everything is fine: the md0 device works, /proc/mdstat is fine, and when I do mount -a it mounts the array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints on how to make it work properly?
    fstab:
        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0
    mdadm config (autogenerated):
        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #
        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions
        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes
        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>
        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED
        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0
        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2
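
    A hedged guess at the root cause: the initramfs may carry a stale copy of mdadm.conf, and the ARRAY line names the array /dev/md/0 (with name=LBox2:0) while the boot scripts wait for /dev/md0, so the array can come up under a different node than the one being waited on. Two checks worth running:

        sudo mdadm --detail --scan     # compare its ARRAY line against /etc/mdadm/mdadm.conf
        sudo update-initramfs -u       # rebuild the initramfs with the current config

    If the scan output disagrees with the config (device name or name= field), updating the config to match and rebuilding the initramfs is the usual fix.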

    Read the article

  • Viewing and deleting partitions using the BIOS?

    - by cluelesscoder
    I have an Asus M4A785TD-M EVO motherboard with Asus Express Gate (the setup screen says American Megatrends, Inc. at the bottom). I get into setup by pressing Del; the boot screen also says Tab activates the BIOS POST, but that doesn't seem to do anything. I went into this expecting to see a breakdown of the partitions. I have a 300 GB hard drive separated into 3 partitions. While setup does show SATA entries for my main hard drive and my disk drive, it doesn't show the partitions. Is this typical? Do I have to use an OS-based tool to delete the partitions, or can I delete them using my BIOS? I tried updating the BIOS through Asus's Update utility but it appears to be broken (connects/disconnects repeatedly). I used HWiNFO32 to get some information: BIOS Date: 06/30/10, BIOS Version: 2103, EFI BIOS: Not Capable. I tried to update but it directs me to biosagentsplus.com, which wants $30 for the download (another question would be how to avoid them).
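
    For what it's worth, a hedged pointer: a traditional BIOS setup screen only enumerates drives, not partition tables, so an OS-level tool is the way to see or delete partitions. On Windows, the built-in diskpart will do both (disk 0 is an assumption; check the list first):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> list partition

    From there, select partition N followed by delete partition removes one; Disk Management (diskmgmt.msc) offers the same operations graphically.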

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes - for example the OMB’s Exhibit 300 (i.e. the business case justification for IT investments).

    A very interesting element of this Approach is the much-needed guidance for actually using an Enterprise Architecture (EA) and/or its collateral - good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn’t always clear, however, particularly during the first interactions of a Program’s technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for an allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost savings).

    The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other business case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. “Actionable” is the key word - too many Public Service entity EAs (including the FEA) have for too long been used simply as highly-abstracted standards references, duly maintained and nominally enforced by an Enterprise or System Architect’s office. A refreshing element of this recent FEA Common Approach is one of the first Federally-documented acknowledgements of the “Solution Architect” (the “problem-solving” role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual “EA Roadmap” documents - roadmaps grounded in real cost, technical and functional details that are fully aligned with both contextual expectations (for example the new “Digital Government Strategy” and its required roadmap deliverables) and the rapidly increasing complexities of today’s more portable and transparent IT solutions.

    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly-formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), which are likely already to be seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain - Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports over the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend. I found that the process for developing complex SSRS reports isn’t as commonly described as I would have thought, so below I lay out the process I went through to create a solution.

    I started with a List control, which contains the layout of the master (parent) information. This provides the main repeating report part. The dataset for this report should include the data elements that need to be passed to the subreport as parameters. As you can see, the layout is simply text boxes bound to the dataset.

    The next step is to set a row group on the List row. When the dialog appears, select the field by which you wish to group the report; a good choice in this case would be the employee name or ID.

    Then create a second report, which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document, by parameterizing the dataset.

    Add the subreport to the main report inside the row of the List control. This can be accomplished either by dragging the report from Solution Explorer or by inserting a Subreport control and then setting its report name property.

    The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. While some of the documentation states that the dialog will automatically detect the child parameters, this has not been my experience; you must make sure that the names match exactly. Tie each parameter either to a field in the dataset or to a parameter of the parent report, as in the sketch below.
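
    A hedged illustration of that last dialog (names taken from the example above): each subreport parameter is mapped with an SSRS expression, e.g.

        EmpId       = Fields!EmpId.Value            (from the List's dataset)
        ReportYear  = Parameters!ReportYear.Value   (passed through from the parent report)

    The left side must match the subreport's parameter names character for character, since SSRS will not reconcile near-misses.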

    Read the article

  • Partner Showcase -- GreyHeller

    - by PeopleTools Strategy
    This is the next in a series of posts spotlighting some of our creative partners. GreyHeller is a PeopleSoft-focused software company founded by PeopleTools alumni Larry Grey and Chris Heller. GreyHeller’s products focus on addressing the technology needs of PeopleSoft customers in the areas of mobile enablement, reporting/business intelligence, security, and change management. The company helps customers protect and extend their investment in PeopleSoft. GreyHeller’s products and services are in use by nearly 100 PeopleSoft customers on 6 continents. Their product solutions are lightweight bolt-ons - extensions to a customer’s PeopleSoft environment requiring no new infrastructure - which makes for rapid implementations.

    A major area of interest for PeopleSoft customers these days is mobile enablement. GreyHeller's current mobile implementations include the following customers: Texas Christian University (live: TCU student newspaper article here), Coppin State University (live), University of Cambridge (June go-live), HealthSouth (June go-live), Frostburg State University (Q3 go-live), Amedisys (Q3 go-live).

    GreyHeller maintains a PeopleTools-focused blog that provides tips, techniques, and code snippets aimed at helping PeopleSoft customers make the most of their PeopleSoft system. In addition to their blog, the GreyHeller team conducts and records weekly webinars that demonstrate the latest PeopleTools features, tips and techniques. Recordings of these webinars can be accessed here. Visit GreyHeller’s web site for more information on the company and its work.

    Read the article

  • javaws crashes, error in ld-linux-x86-64.so.2

    - by user54214
    I am running Ubuntu 11.10 64-bit as Dom0 under Xen, and I am having problems getting Java up and running. Java itself seems to work fine, but I get strange errors, for example when I start javaws. I tried different versions and always get the same error: OpenJDK 1.6 and 1.7 as well as Sun Java 6 and 7 all crash in the same library. All other applications are working fine, so ld-linux-x86-64.so.2 itself seems to be working. Any hints as to what could be wrong?

        Ubuntu01:~$ javaws
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGILL (0x4) at pc=0x00007f4e74c5ad10, pid=7974, tid=139974945277696
        #
        # JRE version: 6.0_23-b23
        # Java VM: OpenJDK 64-Bit Server VM (20.0-b11 mixed mode linux-amd64 compressed oops)
        # Derivative: IcedTea6 1.11pre
        # Distribution: Ubuntu 11.10, package 6b23~pre11-0ubuntu1.11.10.2
        # Problematic frame:
        # C [ld-linux-x86-64.so.2+0x14d10] _dl_make_stack_executable+0x2b70
        #
        # An error report file with more information is saved as:
        # /home/r/hs_err_pid7974.log
        #
        # If you would like to submit a bug report, please include
        # instructions how to reproduce the bug and visit:
        # https://bugs.launchpad.net/ubuntu/+source/openjdk-6/
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #
        Aborted
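
    A hedged first check: confirm which JRE javaws actually resolves to, since with several JDKs installed the crashing binary may not be the one just installed:

        update-alternatives --list javaws
        sudo update-alternatives --config javaws

    Beyond that, the frame _dl_make_stack_executable in the dynamic loader, combined with SIGILL under a Xen Dom0 kernel, hints at an executable-stack/paravirtualization interaction rather than a bug in any one JDK, which would explain why every version crashes identically. That reading is speculative, though.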

    Read the article

  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes the ID of a category (think folder), and this category may have lots of sub-categories, each of which can hold thousands of media containers (think file reference objects) that describe a file sitting on a NAS or SAN server (the files are videos in this case). The relationships between these categories are stored in a database, together with some permission rules and metadata about the sub-categories. So from a UI perspective we have a lazy-loaded tree control that is driven by the user clicking on each sub-folder (think Windows Explorer); once they reach the URL of a video file, they can watch the video. The number of users could grow into the thousands, and the sub-categories and videos could be in the tens of thousands as the system grows. The question is: should we carry on the way it currently works, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.
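
    For scale, a hedged sketch of the usual first step, the cache-aside pattern (shown in Python purely to illustrate the shape; in ASP.NET the same idea is normally built on the framework's built-in cache with an expiration policy): read-mostly data like a category tree is a classic fit, since a short TTL bounds staleness while absorbing most of the repeated reads.

        import time

        _cache = {}        # category_id -> (expires_at, value)
        TTL_SECONDS = 300  # assumed: category structure changes rarely

        def get_subcategories(category_id, load_from_db):
            entry = _cache.get(category_id)
            if entry and entry[0] > time.time():
                return entry[1]                    # hit: no database round trip
            value = load_from_db(category_id)      # miss: one database query
            _cache[category_id] = (time.time() + TTL_SECONDS, value)
            return value

    Whether it's worth it depends on measured load; at thousands of users browsing tens of thousands of nodes, the repeated tree reads are usually the first thing worth caching, while permission checks may still belong at the database.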

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    A great article, although it is a little dated at this point: the InformationWeek piece "Standards Matter: The Battle for Interoperability Goes On".

    Summary: If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better; the alternative is splintered networks and broken promises.

    The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important."

    In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together rather than protect an existing product. Customers must likewise be willing to work together, not demanding a solution that meets only their own needs but one that meets the needs of all participants. Ultimately, customers must reward vendor compliance by requiring it in the products and services they purchase and deploy; managers who deploy systems without compliance to standards are only hurting themselves. Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article

  • Cannot start Oracle XE 11gR2 Net Listener and Database on Ubuntu 13.04

    - by hydrology
    I have been following the setup steps in this article for installing Oracle XE 11gR2 on Ubuntu 13.04. The environment variables PATH, ORACLE_HOME, ORACLE_SID, NLS_LANG and ORACLE_BASE have all been set up correctly:

        simongao:~ 06:16:38$ echo $PATH
        /usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/simongao/adt-bundle-linux-x86_64-20130219/sdk/platform-tools:/u01/app/oracle/product/11.2.0/xe/bin
        simongao:~ 06:18:36$ echo $ORACLE_HOME
        /u01/app/oracle/product/11.2.0/xe
        simongao:~ 06:23:29$ echo $ORACLE_SID
        XE
        simongao:~ 06:23:35$ echo $ORACLE_BASE
        /u01/app/oracle
        simongao:~ 06:23:37$ sudo echo $LD_LIBRARY_PATH
        /u01/app/oracle/product/11.2.0/xe/lib
        simongao:~ 06:23:48$ echo $NLS_LANG
        /u01/app/oracle/product/11.2.0/xe/bin/nls_lang.sh

    However, when I try to start the service, I receive the following error:

        simongao:~ 06:18:40$ sudo service oracle-xe start
        Starting Oracle Net Listener.
        Starting Oracle Database 11g Express Edition instance.
        Failed to start Oracle Net Listener using /u01/app/oracle/product/11.2.0/xe/bin/tnslsnr and Oracle Express Database using /u01/app/oracle/product/11.2.0/xe/bin/sqlplus.
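
    Two hedged observations: NLS_LANG above is set to the path of nls_lang.sh rather than to the locale value that script produces, which is worth correcting; and running the listener by hand, as the oracle user, often surfaces the real error that the init script swallows:

        sudo su - oracle
        $ORACLE_HOME/bin/lsnrctl start
        $ORACLE_HOME/bin/lsnrctl status

    lsnrctl's own output (for example a missing listener.ora or a hostname it cannot resolve) is usually more specific than the generic "Failed to start" above.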

    Read the article

  • Blue screen - BCCode: 7F

    - by Joe
    I've recently bought a refurbished Toshiba Satellite A660 11M and I get the blue screen of death fairly frequently now, and I can't seem to work out why or how to prevent/cure it. At first I thought it was due to ZoneAlarm, from what I read on the net, but then I thought it was a memory issue after it crashed while I was running Memtest. But after I ran Memtest a second time, it completed without any errors... Here are my computer specs:

        Toshiba Satellite A660 11M
        500 GB hard disk
        4 GB RAM
        Windows 7 Premium
        Nvidia GeForce GT 330M

    and here is more info regarding the crashes I keep getting:

        Problem signature:
        Problem Event Name: BlueScreen
        OS Version: 6.1.7600.2.0.0.768.3
        Locale ID: 2057
        Additional information about the problem:
        BCCode: 7f
        BCP1: 0000000000000008
        BCP2: 0000000080050033
        BCP3: 00000000000006F8
        BCP4: FFFFF80003040EC0
        OS Version: 6_1_7600
        Service Pack: 0_0
        Product: 768_1
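
    A hedged pointer for digging further: BCCode 0x7F (UNEXPECTED_KERNEL_MODE_TRAP) with a first parameter of 8 is a double fault, often a driver or kernel-stack problem rather than bad RAM, which would fit Memtest coming back clean. WinDbg from the Debugging Tools for Windows can usually name the faulting driver from the minidump that accompanies each crash:

        windbg -z C:\Windows\Minidump\<most recent>.dmp
        kd> !analyze -v

    The <most recent> placeholder stands for whichever .dmp file matches the crash time; the "Probably caused by" line in the !analyze output is the lead to chase.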

    Read the article

  • Viewing the full Skype chat history

    - by hekevintran
    I have Skype 2.8 on Mac OS X 10.5.8. Under the Chat menu is an option called "Recent Chats", which lets me see logs of recent chats, but not of older ones. I know the older ones are stored, because they are in ~/Library/Application Support/Skype/username/chatmsg256.dbb. When opened in a text editor, this file contains chat text from all my previous Skype chats; however, it is stored in an unknown file format that I do not know how to parse. Does Skype have a built-in log viewer (like Adium's) that I can use to access these older logs?
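
    Not a real viewer, but a hedged stopgap while one is lacking: the printable text can be pulled straight out of the .dbb file from Terminal, at the cost of losing the timestamps and record structure around it:

        strings ~/Library/Application\ Support/Skype/username/chatmsg256.dbb | less

    ('username' here stands for the Skype profile folder name, as in the path above.)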

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made? When faced with this type of decision it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it ultimately must be made, because as the project progresses, requirements and other factors can change which technology is best for the project.

    Important factors to consider when making technology decisions:
    • Time to implement and maintain
    • Total cost of technology (including implementation and maintenance)
    • Adaptability of technology
    • Implementation team's skill sets
    • Complexity of technology (including implementation and maintenance)
    • Forecasted return on investment (ROI)
    • Forecasted profit on investment (POI)

    Of these factors, ROI and POI weigh the heaviest, because they take the other factors into consideration when calculating profitability and return on investment. For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server.

    Important factors for this example:

    Implementation team's skill sets:
    • Member 1 - Classic ASP, ASP.Net, and MS SQL Server; 10 years' experience
    • Member 2 - PHP, MySQL, Photoshop and MS SQL Server; 3 years' experience
    • Member 3 - C++, VB6, ASP.Net, and MS SQL Server; 12 years' experience

    Total cost of technology, including implementation and maintenance (random values):
    • Linux: $5,000 initial year; $3,000 each additional year
    • Windows: $10,000 initial year; $3,000 each additional year

    Complexity of technology:
    • Linux: large learning curve with user-driven documentation; estimated learning cost $30,000
    • Windows: minimal, based on the team's skills, with Microsoft documentation; estimated learning cost $5,000

    ROI:
    • Linux total cost: $35,000 initially ($5,000 technology + $30,000 learning), plus $3,000 per year
    • Windows total cost: $15,000 initially ($10,000 technology + $5,000 learning), plus $3,000 per year

    Based on these hypothetical numbers, it makes more sense to select the Windows-based web server, because the initial investment in the technology is much lower than for the Linux-based web server.

    Read the article

  • How do I find out which process is eating up my bandwidth?

    - by Bruce Connor
    I think I'm being the victim of a bug here. Sometimes while I'm working (I still don't know why), my network traffic goes up to 200 KB/s and stays that way, even though I'm not doing anything internet-related. This sometimes happens to me with CPU usage too; when it does, I just run a top command to find out which process is responsible and then kill it. The problem is that I have no way of knowing which process is responsible for my high network usage. Both the resource monitor and the top command only tell me my total network usage; neither tells me process-specific network info. Is there another command I can use to find out which process is getting out of hand? I've already tried killing all the obvious ones (firefox, update-manager, pidgin, etc.) with no luck. So far, restarting the machine is the only way I've found of getting rid of the issue. EDIT: (just to be clear) I've found questions here about monitoring total bandwidth usage, but, as I mentioned, that's not what I need. UPDATE: The command iftop gives results that disagree entirely with the information reported by System Monitor. While the latter claims there's high network traffic, the former claims there's barely 1 KB/s. Thanks
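
    A hedged suggestion that fits the top-style workflow: nethogs groups live traffic by the owning process, which is exactly the per-process view top and System Monitor lack (the interface name below is an assumption; run it bare to use the default):

        sudo apt-get install nethogs
        sudo nethogs eth0

    Its per-process KB/s column should either name the culprit directly or, if it stays near zero like iftop, suggest that System Monitor's reading is the part to distrust.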

    Read the article

  • I have deleted python files in /usr/bin and can't reinstall them

    - by Plonkaa
    I am a novice at Ubuntu and unfortunately I have deleted three files in the /usr/bin folder: python2.7, python, and python2.6. Now my Update Manager won't work, and when I type python into a GNOME terminal it says it is no longer there. Please help; I've tried loads of different things but it just won't work. The closest I got was the following: I typed sudo apt-get -f install and thought I had fixed it, but then I got an error message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          gir1.2-folks-0.6 gir1.2-polkit-1.0 libcogl5 mutter-common gir1.2-json-1.0 libcaribou0 gir1.2-accountsservice-1.0 gir1.2-clutter-1.0 gir1.2-gkbd-3.0 gir1.2-networkmanager-1.0 caribou libcogl-common libmutter0 gir1.2-mutter-3.0 gjs gir1.2-caribou-1.0 libclutter-1.0-0 gir1.2-telepathylogger-0.2 libclutter-1.0-common cups-pk-helper gir1.2-upowerglib-1.0 gir1.2-cogl-1.0 libmozjs185-1.0 gir1.2-telepathyglib-0.12 gir1.2-gee-1.0 libgjs0c gnome-shell-common
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          ubuntu-sso-client
        The following packages will be upgraded:
          ubuntu-sso-client
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/57.7 kB of archives.
        After this operation, 16.4 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up python-minimal (2.7.2-7ubuntu2) ...
        /var/lib/dpkg/info/python-minimal.postinst: 4: python2.7: not found
        dpkg: error processing python-minimal (--configure):
         subprocess installed post-installation script returned error exit status 127
        Errors were encountered while processing:
         python-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any advice is appreciated!
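
    A hedged reading of that error: python-minimal's post-install script itself runs python2.7, which is one of the deleted files, so apt is stuck until the binary is restored. Reinstalling the package that ships /usr/bin/python2.7 unpacks the file before any script runs, which usually breaks the loop (package names as on Ubuntu 11.10):

        sudo apt-get install --reinstall python2.7-minimal python2.7
        sudo apt-get -f install

    If apt refuses because of the broken state, the fallback is to fetch the .deb directly (apt-get download python2.7-minimal) and install it with sudo dpkg -i.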

    Read the article
