Search Results

  • Is there any good hosting for asp.net and MySQL

    - by HAJJAJ
    Hi everyone. I have an account with a hosting company, and I built my project in ASP.NET with MySQL as the database. The hosting company will not give me the privileges to create a new user or to create new stored procedures. This is what they told me:

        "Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html "Note: When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure." Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed."

    So, is there any good host that anybody has already used to publish an ASP.NET and MySQL project? Here is one of my stored procedures; I think it's simple and will not harm any other users:

        -- --------------------------------------------------------------------------------
        -- Routine DDL
        -- Note: comments before and after the routine body will not be stored by the server
        -- --------------------------------------------------------------------------------
        DELIMITER $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
            IN PaRactioncode VARCHAR(5),
            IN PaRCatID BIGINT,
            IN PaRSearchText TEXT
        )
        BEGIN
            -- Temporary table to hold the rows selected by the action code
            DROP TEMPORARY TABLE IF EXISTS tmp;
            CREATE TEMPORARY TABLE tmp (
                CatID BIGINT PRIMARY KEY NOT NULL,
                CatTitle TEXT,
                CatDescription TEXT,
                CatTitleAr TEXT,
                CatDescriptionAr TEXT,
                PictureID BIGINT,
                Published BOOLEAN,
                DisplayOrder BIGINT,
                CreatedOn DATE
            );

            IF PaRactioncode = 1 THEN
                -- Retrieve all rows
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories;
            ELSEIF PaRactioncode = 2 THEN
                -- Retrieve a single row by ID
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories
                WHERE CatID = PaRCatID;
            ELSEIF PaRactioncode = 3 THEN
                -- Retrieve published rows in display order
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories
                WHERE Published = 1
                ORDER BY DisplayOrder;
            END IF;

            IF PaRSearchText IS NOT NULL THEN
                SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp
                WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
            ELSE
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp;
            END IF;

            DROP TEMPORARY TABLE IF EXISTS tmp;
        END$$

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens?

    At the moment, I have a GameManager class which owns a component manager and an entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists.

    A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or I'd have to enable/disable entities in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges.

    What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome, because some things are always visible, or I might want to continue to display the game under a translucent menu window.

    Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack.

    If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
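
    A rough sketch of the "layer key" option in Java; every name here (Entity, RenderSystem, layer) is illustrative rather than taken from the engine described above:

        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        class Entity {
            final long id;
            final int layer; // which screen/layer this entity participates in
            Entity(long id, int layer) { this.id = id; this.layer = layer; }
        }

        class RenderSystem {
            private final List<Entity> entities = new ArrayList<>();

            // Called from the existing UpdateEntity event when an entity
            // gains or loses the components this system requires.
            void track(Entity e)   { entities.add(e); }
            void untrack(Entity e) { entities.remove(e); }

            // Draw each active layer in stack order; entities are filtered
            // by layer, so no per-screen collections need to be duplicated.
            void draw(Deque<Integer> activeLayers) {
                for (int layer : activeLayers) {
                    for (Entity e : entities) {
                        if (e.layer == layer) {
                            // render e here
                        }
                    }
                }
            }
        }

    A real implementation would presumably cache the per-layer subsets, as the post itself suggests, rather than re-filtering on every draw.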

    Read the article

  • One Week To Go: OTN Architect Day: Cloud Computing

    - by Bob Rhubart
    One week remains until OTN Architect Day: Cloud Computing kicks off at the spectacular Oracle HQ campus in Redwood Shores, CA. The event is free, and there is still time to register.

    When: Tuesday, July 9, 2013, 8:30am - 12:30pm
    Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065

    Register now. It's free! Here's the latest update to the event agenda:

    8:30am - 9:00am: Registration and Continental Breakfast

    9:00am - 9:45am: Keynote: 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle
    Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social, and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects.

    9:45am - 10:30am: Technical Session: Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle
    Building a cloud can be challenging thanks to the complex requirements unique to cloud computing and the massive scale typically associated with it. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS cloud services and a cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture.

    10:30am - 10:45am: Break

    10:45am - 11:30am: Technical Session: Database as a Service | Markus Michalewicz, Senior Principal Product Manager, Oracle Real Application Clusters (RAC)
    New applications are now commonly built in a cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise.

    11:30am - 12:00pm: Panel Q&A
    Dr. James Baty, Anbu Krishnaswami, and Markus Michalewicz respond to audience questions.

    Registration is free, but seating is limited, so register now.

    Read the article

  • SQL query duplicating results [on hold]

    - by Ben
    I have written a query intended to retrieve the top 5 customers per account manager from my table. Here is the query:

        SELECT account_manager_id, mgap_ska_id, total
        FROM (
            SELECT account_manager_id, mgap_ska_id,
                   mgap_growth + mgap_recovery AS total,
                   @grp_rank := IF(@current_accmanid = account_manager_id, @grp_rank + 1, 1) AS grp_rank,
                   @current_accmanid := account_manager_id
            FROM mgap_orders
            ORDER BY total DESC
        ) ranked
        WHERE grp_rank <= 5

    and here is the result of the query:

        account_manager_id  mgap_ska_id  total
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38

    As you can see, the query is partially working as needed, except I'm getting the same mgap_ska_id duplicated five times, where there should be five distinct mgap_ska_id values. And here is a sample of my data:

        mgap_ska_id  account_manager_id  mgap_growth  mgap_recovery
        5057810      64154               0            1160.78
        5178114      24456               0            5773.42
        5292421      160338              0            5146.04
        5414091      24408               0            104.14
        5057810      64154               0            1160.78

    Can anyone see where I've gone wrong in my query, and how/where I might correct the error so I get the 5 top individual customers (mgap_ska_id) instead of the single top customer duplicated?

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green, but the production application didn't work properly. Here's what happened:

    - I had a filter class that set my application to ignore data that was not in some specified time windows.
    - The unit test, which seemed thorough to me, turned green.
    - Additionally, my integration tests also produced results as expected.
    - Production, however, did not work.

    As a result of the first two bullets, this problem was very difficult to find. It turned out the problem was that my test dates were using my time zone (America/Chicago), but the production data was providing dates in UTC, which I did not realize, and the logic for the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects.)

    Where did my workflow break down?

    - Did I fail to produce a spec that specified that the logic needed to handle dates in any time zone?
    - Did I fail to thoroughly consider all cases at the unit test level?
    - Did I fail to ensure the integration test was sufficiently similar to production?
    - Other?

    What changes can I make to my workflow to better prevent this sort of mistake in the future? How can I more effectively debug a problem when there is an issue in production but not in testing?
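
    For what it's worth, a minimal Joda-Time sketch of the kind of test that would exercise the time-zone case the post describes; TimeWindowFilter and isInWindow are hypothetical names standing in for the filter class from the post:

        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;

        public class TimeWindowFilterTest {

            // A filter must treat two representations of the same instant
            // identically, whatever zone the data producer happened to use.
            // TimeWindowFilter/isInWindow are illustrative, not real APIs.
            public void sameInstantInAnyZoneIsFilteredTheSameWay() {
                TimeWindowFilter filter = new TimeWindowFilter(/* window spec */);

                DateTime chicago = new DateTime(2013, 6, 1, 9, 30,
                        DateTimeZone.forID("America/Chicago"));
                DateTime utc = chicago.withZone(DateTimeZone.UTC); // same instant, different zone

                if (filter.isInWindow(chicago) != filter.isInWindow(utc)) {
                    throw new AssertionError("filter result depends on time zone");
                }
            }
        }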

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now, the issue blocking me from running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run them in parallel against the same database without uniquely generating the test data fields for each test.

    For example:

    - Testing creating a row: I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar.
    - Testing that a duplicate row is not allowed to be created: I'll insert a row with column A = foo and column B = bar, and then use the UI to try and do the exact same thing. This displays an error message in the UI, as expected.

    These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
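
    One common fix, sketched minimally in Java (all names are illustrative): derive the test values from a per-test unique token instead of shared constants like foo/bar, so parallel runs can never collide:

        import java.util.UUID;

        final class TestData {
            // "foo" becomes e.g. "foo-3f2a9c1e-..." -- unique to this test run
            static String unique(String base) {
                return base + "-" + UUID.randomUUID();
            }
        }

        // inside a test (helper methods are hypothetical):
        // String a = TestData.unique("foo");
        // String b = TestData.unique("bar");
        // deleteRowsWhere(a, b);      // cleanup only ever touches this test's rows
        // createRowThroughUi(a, b);   // other tests use different values, so no clashes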

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way.

    I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space) - so it seemed logical that I take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order.

    However, I faced a problem. If you use GL_DEPTH_TEST and GL_BLEND together, it creates an effect where objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android):

        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }

    So the question is: how do I solve the transparency overlap problem?
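
    For cut-out style tiles (texels either fully opaque or fully transparent), one standard remedy is alpha testing: discard transparent fragments before they can write depth, so draw order stops mattering. A hedged sketch against the fixed-function GL ES 1.x interface the post's GL10 calls imply:

        // in create(), alongside the existing setup -- this assumes binary
        // (on/off) transparency rather than smooth translucency:
        Gdx.gl10.glEnable(GL10.GL_DEPTH_TEST);
        Gdx.gl10.glEnable(GL10.GL_ALPHA_TEST);
        Gdx.gl10.glAlphaFunc(GL10.GL_GREATER, 0.5f); // drop fragments with alpha <= 0.5

    Truly translucent objects still need the classic two-pass approach: render opaque geometry first with depth writes on, then render the translucent set back-to-front with depth writes off.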

    Read the article

  • Network printer - Print direct or via shared printer on Server?

    - by NickC
    It has occurred to me that a workstation can connect to a printer in two ways:

    1). Printing directly to the IP of the printer, with the print driver installed locally.
    2). Printing to a \\Server\Printer1 share, with the print queue residing on the server.

    The question is: which way is preferred? I would assume that printing directly to a network printer, rather than going through the server, would be the most efficient from the point of view of network traffic. On the other hand, I guess a server printer share would be easier to manage, with the correct driver automatically being downloaded to the workstations. Also, what about using GPP (Server 2012) to install this printer on the workstations - does that require one approach or the other?

    Read the article

  • Using the right folder for the right job. Article link, please?

    - by Droogans
    There are specific folders designed for specific tasks. /var/www holds your web sites, /usr/bin contains files to run your applications...yet I still find myself putting nearly all of my work in ~. Is it possible to overuse my home directory? Will it come back to haunt me? Anyone have a good link to an article of best practices for organizing your files so that they are placed in their "correct" place? Is there even such a thing in Linux? I am referring specifically to user-generated content. I do not compile applications from source, I use apt-get for those tasks. This article has a great introduction to what I'm looking for. Table 3-2, "Subdirectories of the root directory" is the sort of thing I'm looking for, but with more details/examples.

    Read the article

  • Storing large amounts of small files into bigger files on Windows

    - by asmo
    Let's say I have 50 GiB of files that weigh around 500 KiB each. My guess is that having, for example, 5 large files of 10 GiB each, with the same content archived in them, would be better for hard drive performance. Am I correct? Will there be a noticeable gain on an NTFS filesystem?

    Finally, which tool could I use to group the files together while retaining the ability to modify the content of the archive with zero or minor performance loss? For example, I like TrueCrypt archiving because after mounting an archive file, it creates a drive which I can use seamlessly as if it were a normal drive. The only thing with TrueCrypt is that I don't need encryption/compression, only archiving.
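
    As an aside, for programs on the JVM the built-in zip filesystem is one "mountable container" option worth knowing about: it opens an archive as a FileSystem whose entries can be read and replaced individually, loosely analogous to mounting a TrueCrypt volume. A minimal sketch, with an illustrative path:

        import java.net.URI;
        import java.nio.file.FileSystem;
        import java.nio.file.FileSystems;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Map;

        public class ContainerDemo {
            public static void main(String[] args) throws Exception {
                // open (or create) bundle1.zip as a mounted filesystem
                URI uri = URI.create("jar:file:/C:/data/bundle1.zip");
                try (FileSystem zipfs = FileSystems.newFileSystem(uri, Map.of("create", "true"))) {
                    Path entry = zipfs.getPath("/docs/file0001.bin");
                    Files.createDirectories(entry.getParent());
                    Files.write(entry, new byte[500 * 1024]); // one ~500 KiB file
                } // changes are flushed back into bundle1.zip on close
            }
        }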

    Read the article

  • ubuntu 13.04 recognizes usb mobile broadband modem as ethernet connection

    - by Bence Mihalka
    When I plug in my USB mobile broadband modem (ZTE MF-667), the network manager shows an ethernet connection instead of a mobile broadband connection, called "Ethernet Network (ZTE WCDMA Technologies MSM)", which of course doesn't work. Here is my lsusb output and the relevant parts of the dmesg output:

    lsusb:

        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0cf3:3005 Atheros Communications, Inc. AR3011 Bluetooth
        Bus 001 Device 004: ID 04f2:b1b9 Chicony Electronics Co., Ltd Asus Integrated Webcam
        Bus 001 Device 005: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 002 Device 004: ID 19d2:1405 ZTE WCDMA Technologies MSM

    dmesg:

        [  195.328467] usb 2-1.1: new high-speed USB device number 3 using ehci-pci
        [  195.423545] usb 2-1.1: New USB device found, idVendor=19d2, idProduct=1225
        [  195.423555] usb 2-1.1: New USB device strings: Mfr=3, Product=2, SerialNumber=4
        [  195.423561] usb 2-1.1: Product: ZTE WCDMA Technologies MSM
        [  195.423567] usb 2-1.1: Manufacturer: ZTE,Incorporated
        [  195.423572] usb 2-1.1: SerialNumber: P680A1ZTED000000
        [  195.426319] scsi7 : usb-storage 2-1.1:1.0
        [  196.425354] scsi 7:0:0:0: CD-ROM CWID USB SCSI CD-ROM 2.31 PQ: 0 ANSI: 2
        [  197.447919] usb 2-1.1: USB disconnect, device number 3
        [  197.457582] sr0: scsi3-mmc drive: 243x/186x xa/form2 cdda pop-up
        [  197.457594] cdrom: Uniform CD-ROM driver Revision: 3.20
        [  197.459058] sr 7:0:0:0: Attached scsi CD-ROM sr0
        [  197.459483] sr 7:0:0:0: Attached scsi generic sg2 type 5
        [  197.759186] usb 2-1.1: new high-speed USB device number 4 using ehci-pci
        [  197.854543] usb 2-1.1: New USB device found, idVendor=19d2, idProduct=1405
        [  197.854556] usb 2-1.1: New USB device strings: Mfr=4, Product=3, SerialNumber=5
        [  197.854564] usb 2-1.1: Product: ZTE WCDMA Technologies MSM
        [  197.854572] usb 2-1.1: Manufacturer: ZTE,Incorporated
        [  197.854579] usb 2-1.1: SerialNumber: P680A1ZTED010000
        [  197.957739] scsi8 : usb-storage 2-1.1:1.2
        [  198.076554] cdc_ether 2-1.1:1.0 eth1: register 'cdc_ether' at usb-0000:00:1d.0-1.1, CDC Ethernet Device, 00:a0:c6:00:00:00
        [  198.076583] usbcore: registered new interface driver cdc_ether
        [  198.955985] scsi 8:0:0:0: CD-ROM CWID USB SCSI CD-ROM 2.31 PQ: 0 ANSI: 2
        [  198.956797] scsi 8:0:0:1: Direct-Access ZTE MMC Storage 2.31 PQ: 0 ANSI: 2

    I created the appropriate mobile broadband connection manually, but I cannot enable it in the network manager, since the device is not recognized as mobile broadband. Any tips on how to make it work?

    Read the article

  • Commerce Anywhere...Where the Web, Store, Mobile, Social and Call Center Come Together

    - by divya.malik
    I am pleased to introduce guest blogger Bill Zujewski today. Bill has just joined the Oracle CRM Product Marketing team as part of our recent ATG acquisition. Based in Cambridge, MA, Bill was the VP of Product Marketing for ATG and collaborated on eCommerce strategy with some of the best brands in the world. Welcome, Bill!

    BY BILL ZUJEWSKI

    "Times are a changing"... or so the song goes. Not long ago, eCommerce just meant having a cool brand and a slick website. Today, customers expect much more... what I think they really want... Commerce Anywhere... a seamless, consistent and personal way to interact or transact business with you and your products, whether they start on the web, go into a store, talk over the phone, or access products via their mobile device or on their favorite social media site. They want one more thing... for you to remember them and their history with you... so they can be treated more intelligently and not have to repeat previous interactions. It makes sense to me, I want it too... it saves me time and money.

    I work with many companies that are trying to understand how to evolve their business structure and technology solutions to meet the challenges of Commerce Anywhere. My advice... think differently and take a more holistic approach to the customer experience and the cross-channel selling solution. Stop integrating siloed legacy systems and start thinking about a single platform as your new foundation... the eCommerce platform.

    I recently wrote a new white paper, Commerce Anywhere - A Business and Technology Strategy to Maximize Cross-channel Commerce Growth, to help our customers better understand how to create that "Commerce Anywhere" customer experience that customers really want. The paper offers practical insights into an IT transformation that can help you leverage a commerce platform to go beyond the web storefront and instead use it to enable rapid expansion into mobile apps, new in-store apps, and interaction with your customers through social commerce. Let me know what you think by posting a comment on this blog.

    Read the article

  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup:

    - 512x448 window
    - 64x64 grid
    - gl_Position = projection * world * position;
    - projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f); (a textbook orthographic projection function)
    - world is defined by a fixed camera position at (0, 0)
    - position is defined by the sprite's position

    Problem: In the screenshot below (1:1 scaling), the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly ~10px from the expected position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of producing a 1:1 pixel-to-world-unit projection.

    Anyhow, here are some quick images to aid in the problem. I decided to super-impose a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went back to the base case of 1 unit, which lined up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO:

               x     y         x     y
        -----------------------------------
        tl |  0.0  24.0      64.0  24.0
        bl |  0.0   0.0  ->  64.0   0.0
        tr | 16.0   0.0      80.0   0.0
        br | 16.0  24.0      80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining the 1:1 pixel-to-world-unit projection.
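
    For reference, a minimal Java sketch of one 1:1 pixel-to-unit setup (a hedged illustration, not the poster's code): with ortho bounds of (0, w) x (0, h), one world unit lands on one pixel, and snapping the camera translation to whole pixels avoids sub-pixel drift:

        // Column-major 4x4 orthographic projection with near=-1, far=1,
        // matching the textbook ortho() described above.
        static float[] ortho(float l, float r, float b, float t) {
            return new float[] {
                2f / (r - l), 0f,           0f,  0f,
                0f,           2f / (t - b), 0f,  0f,
                0f,           0f,          -1f,  0f,
                -(r + l) / (r - l), -(t + b) / (t - b), 0f, 1f
            };
        }

        // float[] projection = ortho(0f, 512f, 0f, 448f);    // 1 unit == 1 pixel
        // camX = Math.round(camX); camY = Math.round(camY);  // pixel-snapped camera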

    Read the article

  • Exchange 2007 - How to use export-mailbox function?

    - by Khalid Rahaman
    I am trying to use the Export-Mailbox cmdlet to export a mailbox to a PST file, but I get an error that I'm running on a 64-bit machine and must use a 32-bit one, etc. I have a Windows 7 Pro 32-bit PC joined to the domain with the Exchange server, and 32-bit Outlook installed. When I try to install only the Exchange 2007 32-bit management console, I'm told that I can't install the management tools on a Windows 7 PC. Can someone please advise whether this is the correct setup to be able to run the Export-Mailbox function to dump the mailboxes into PST files? Thank you.

    Read the article

  • Windows XP runs New Hardware Wizard for usb keyboard and mouse, can't find drivers

    - by Randy Orrison
    I have a PC that up until a couple days ago was working fine. I moved it from one site to another and now when I plug in the USB mouse or keyboard (the same ones that were working previously) XP brings up the New Hardware Wizard. Going through it, the correct keyboard and mouse are identified, but XP can't find the drivers. I've tried manually searching for the driver (using the Have Disk option) - the first file it's looking for is in the c:\i386 directory, but that installs a generic HID mouse device; the system then runs the hardware wizard for a new "unknown" device. The system was SP2, I have installed SP3 in hopes that would help, and I've also downloaded and installed the mouse drivers from Dell's site (there are no specific drivers for the keyboard), with no change. Before I completely reinstall XP, is there anything else I should try?

    Read the article

  • Will BIOS boot mode Ubuntu install be able to boot when firmware "Fast Boot" is "Ultra Fast"?

    - by Pro Backup
    I have an ASRock mainboard with UEFI BIOS P1.50 (02/14/2014). The firmware "Fast Boot" option is set to "Fast", and Boot Option #1 is set to "AHCI P4: OCZ-VERT...": this is BIOS boot, not UEFI. The boot disk has an MBR partitioning scheme (# parted -l | grep Partition\ Table:). Therefore Ubuntu 14.04 is installed in BIOS/CSM (Grub-PC) mode. The Ubuntu boot process ends in a text console (no GUI). There is no external graphics card in use. The stock Ubuntu kernel is replaced with the Ubuntu-supplied mainline 3.16.0-031600rc6-generic. dmesg outputs lines containing BIOS, like:

        SMBIOS 2.7 present
        Calgary: detecting Calgary via BIOS EBDA area
        Calgary: Unable to locate Rio Grande table in EBDA - bailing!
        [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        BIOS EDD facility v0.16 2004-Jun-25, 0 devices found

    The ASRock BIOS itself displays this help text for "Ultra Fast - Fast Boot":

        Ultra Fast mode is only supported by Windows 8 and the VBIOS must support UEFI GOP if you are using an external graphics card. Please notice that Ultra Fast mode will boot so fast that the only way to enter this UEFI Setup Utility is to Clear CMOS or run the Restart to UEFI utility in Windows.

    Assumptions: I suspect that after changing the UEFI setting "Fast Boot" to "Ultra Fast", the machine will no longer boot into Ubuntu's console. I expect that after first exchanging Grub-PC for Grub-EFI, the machine will still be able to boot to a GRUB menu (thus allowing the "Fast Boot" setting to be changed back to "Fast" without clearing CMOS).

    Are these two "Fast Boot" assumptions correct, and/or may I expect Ubuntu 14.04, running mainline kernel 3.16rc6 and Grub-EFI, to still boot to the console after enabling UEFI Ultra Fast Boot?

    Read the article

  • how do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135M video card).

    - For starters, the System Displays settings app always showed only one unknown monitor, even though I had the external Acer monitor connected.
    - So I downloaded and installed the nVidia drivers. According to what I read, I would need to use only the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System Displays settings continued to show only a single monitor anyway.)
    - The nVidia settings app only showed one monitor until I changed the BIOS setting to use the onboard video for the external monitor (not the dock video, which it was set to, even though I don't have a docking station).
    - The nVidia settings app then recognized both monitors, so I set up the X Server display config as a separate X screen for each monitor. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080.
    - Everything seemed like it would work, but the external monitor was just a complete white screen. It was non-functional, even though sometimes it would show the background image - still, nothing would show up over there.
    - So, I checked the Enable Xinerama box.
    - Now, after logging out and back in, the wallpaper extends to both screens, but I get no taskbar at the bottom or top and no system menus, and I have to press the power button to restart or log off.
    - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is GNOME Classic.
    - This is pretty much the same symptoms as found here: How do I fix 11.10 GUI?
    - So, I resigned myself to the older shell.
    - Everything worked fine until... I unplugged the external monitor... this is a laptop, after all.
    - Anyway, after doing some work on the road, I plugged back in. I still see both screens, and it's functional, except...
    - Now the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time, and the system menu on the right). But the next 3 bars (the same height as the top menu bar) are empty and just reduce the maximum size of windows on that screen.
    - See screenshot here: http://i39.tinypic.com/35d2kh1.png

    So...

    1. How do I get rid of those extra 3 black bars? They're taking valuable screen space.
    2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?

    Read the article

  • ODSI + weblogic = JDBC problem

    - by Giuseppe Di Federico
    I'm currently developing a web service using ODSI through Oracle Workshop for WebLogic (ex AquaLogic). I created a data source on WebLogic using the driver "Oracle thin driver 10g", and the test succeeds on WebLogic. (My database is Oracle 10, 10.2.0.1.0.)

    The problem occurs when I try to create the Physical Data Service in the Oracle Workshop. I choose the following options:

        Data source type = Relational
        Data source = [THE CORRECT NAME OF THE SOURCE SET ON WEBLOGIC]
        Database type = ???

    AquaLogic doesn't allow me to select the database type. I guess it's a problem related to the driver set on WebLogic... but I ain't sure. Does someone know the nature of my problem? Tnx

    Read the article

  • Hard link not works under MacOS in GUI mode

    - by AntonAL
    Hi, I ran into slightly strange behavior while using hard links. From the terminal, I create a text file 1.txt and a hard link to it:

        nano 1.txt
        mkdir dir
        ln 1.txt ./dir/

    I check the resulting hard link and see that its contents are the same as those of the original file:

        less ./dir/1.txt

    I change the initial file...

        nano 1.txt

    ...and see that the changes are reflected in the hard link:

        less ./dir/1.txt

    I change the content of the hard link (more correctly, of the file being referenced by the hard link)...

        nano ./dir/1.txt

    ...and see that the changes are reflected in the initial file:

        less 1.txt

    Until now, all is going well... Now I close the terminal and start playing with the created files (1.txt and ./dir/1.txt) from the Finder. When I change one of these two files with TextEdit, the changes are not reflected in the other file. It's just as if the hard link had been torn off... What is going on here?
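
    For anyone who wants to reproduce the terminal half of this experiment programmatically, here is a minimal Java sketch using java.nio.file (paths are illustrative). The caveat in the final comment is one common explanation for Finder/TextEdit behavior like the above: editors that save by writing a temp file and renaming it replace the inode, detaching the other name.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class HardLinkDemo {
            public static void main(String[] args) throws IOException {
                Path original = Paths.get("1.txt");
                Path dir = Files.createDirectories(Paths.get("dir"));
                Files.writeString(original, "first version\n");

                // equivalent of `ln 1.txt ./dir/`
                Path link = Files.createLink(dir.resolve("1.txt"), original);

                // writing through either name updates the one shared inode
                Files.writeString(original, "second version\n");
                System.out.print(Files.readString(link)); // prints "second version"

                // Caveat: an editor that saves via write-temp-then-rename
                // creates a brand-new inode for 1.txt, so the hard link in
                // dir/ silently stops tracking it.
            }
        }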

    Read the article

  • Send all traffic over VPN connection not working Windows VPN host

    - by Adam Schiavone
    I am trying to get a Mac (10.8) to connect through VPN to a server running Windows Server 2008 R2 and pass all requests from the Mac to the server. The VPN is set up and I can connect and access the server through a web browser, but for all other sites the DNS lookup fails. I have tried adding a DNS server to the VPN host.

    Example: let's say the VPN server also hosts a website, example.com. I connect to the VPN with my Mac and point a browser to example.com, and everything works fine. But when I point the browser to google.com, it just sits there and eventually comes back with a DNS-lookup-failed message.

    HOWEVER: I tried running the command dig @myServersIpHere www.google.com on the Mac, and it comes back with correct IP addresses. I really don't know what to do from here. How can I route all requests from my Mac, through my Windows server, via VPN?

    Read the article

  • Precising definition of programming paradigm

    - by Kazark
    Wikipedia defines programming paradigm thus:

        a fundamental style of computer programming

    which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words programming and paradigm could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in Clean Code (13),

        Consider this book a description of the Object Mentor School of Clean Code. The techniques and teachings within are the way that we practice our art. We are willing to claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed, and you will learn to write code that is clean and professional. But don't make the mistake of thinking that we are somehow "right" in any absolute sense.

    Thus Bob Martin is not claiming to have the correct style of object-oriented programming, even though he, if anyone, might have some claim to doing so. But even within his school of programming, we might have different styles of formatting the code (K&R, etc.). There are many styles of programming at many levels.

    So how can we define programming paradigm rigorously, to distinguish it from other categories of programming styles? Fundamental is somewhat helpful, but not specific. How can we define the phrase in a way that will communicate more than the separate meanings of each of the two words - in other words, how can we define it in a way that will provide additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?

    Read the article

  • Mutt and msmtp interoperability

    - by illusionoflife
    I am working on configuring mutt to send mail via msmtp. Strangely, if I use msmtp from a shell, everything works, which means that .msmtprc is correct. However, mail sent with mutt does not arrive. I have this line in .muttrc:

        set sendmail="msmtp"

    How can I debug this problem?

    EDIT: I found that if I send just text, like msmtp 'my-email' <<< "Hello", it works. But if I send a fully built email with headers, it does not. Is it a Gmail policy, or something else?

    Read the article

  • Problem upgrading kernel on debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm, so I have no direct access - only remote SSH (and, via SSH, a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency on glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel, and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is that the install of a new kernel has an indirect dependency (via locales) on glibc. So, to install glibc, I need a new kernel; for a new kernel, I need to upgrade glibc. Essentially, I am blocked. What's the best way to proceed, considering I have no "hardware" access? Here's a quick transcript of the upgrade process:

        [green:~]% sudo aptitude install linux-image-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        The following packages are unused and will be REMOVED:
          gcc-4.3-base
        The following NEW packages will be automatically installed:
          dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
          linux-image-2.6.18-6-686 module-init-tools yaird
        The following packages have been kept back:
          adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt
          apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd
          [...snip...]
          util-linux vacation vim vim-common wamerican wbritish wget whiptail whois
          wwwconfig-common zlib1g
        The following NEW packages will be installed:
          dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
          linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird
        The following packages will be upgraded:
          hotplug libc6
        2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded.
        Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used.
        Do you want to continue? [Y/n/?]
        Writing extended state information... Done
        Preconfiguring packages ...
        (Reading database ... 34065 files and directories currently installed.)
        Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ...
        Checking for services that may need to be restarted...
        Checking init scripts...
        WARNING: init script for postgresql not found.

        [ --- libc6 config screen appears here --- ]

        WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later.
        If you use a kernel 2.4, please upgrade it before installing glibc.
        The installation of a 2.6 kernel _could_ ask you to install a new libc first,
        this is NOT a bug, and should *NOT* be reported. In that case, please add
        etch sources to your /etc/apt/sources.list and run:
          apt-get install -t etch linux-image-2.6
        Then reboot into this new kernel, and proceed with your upgrade
        dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack):
         subprocess pre-installation script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        Ack! Something bad happened while installing packages. Trying to recover:
        dpkg: dependency problems prevent configuration of locales:
         locales depends on glibc-2.7-1; however:
          Package glibc-2.7-1 is not installed.
        dpkg: error processing locales (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         locales
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done

    Now, if I follow the instructions as prompted, I get the following. Note that I am using aptitude instead of apt-get to benefit from the better dependency tracking; I did try with apt-get first, but it led me to the same problem.

        [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        E: Unable to correct problems, you have held broken packages.
        E: Unable to correct dependencies, some packages cannot be installed
        E: Unable to resolve some dependencies!
        Some packages had unmet dependencies. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created or
        been moved out of Incoming.
        The following packages have unmet dependencies:
          linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or
                                    yaird (>= 0.0.13) but it is not installable or
                                    linux-initramfs-tool which is a virtual package.

    Any ideas?

    Read the article

  • netsh wlan add profile does not import passphrase

    - by sirlancelot
    I exported a wireless network connection profile from a Windows 7 machine correctly connected to a WiFi network with a WPA-TKIP passphrase. The exported xml file shows the correct settings and a keyMaterial node which I can only guess is the encrypted passphrase. When I take the xml to another Windows 7 computer and import it using netsh wlan add profile filename="WiFi.xml", it correctly adds the profile's SSID and encryption type, but a balloon pops up saying that I need to enter the passphrase. Is there a way to import the passphrase along with all other settings?

    Read the article
