Search Results

Search found 57327 results on 2294 pages for 'nested set'.


  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
    My game relies heavily on textures of various sizes, with some being full-screen. The game is targeted for multiple resolutions. I found that resizing textures (downsizing) works quite well for this game’s art type (it’s not Pixel Art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely “overflow” off screen; this means that aspect ratio is not an issue. So with no aspect ratio issues, I figured that I would simply ask my artist to create assets in very high resolution, and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all assets in the final product (increasing the download size that the user has to deal with), then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target and draw all downsized assets to it and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
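
    A minimal sketch of the render-target option in XNA-style C#, assuming a one-time downsize pass at load (names here are illustrative, not from the original post):

        // Downsize a high-resolution texture once at load, then use the result everywhere.
        RenderTarget2D DownsizeAtLoad(GraphicsDevice device, SpriteBatch batch,
                                      Texture2D source, int targetWidth, int targetHeight)
        {
            var target = new RenderTarget2D(device, targetWidth, targetHeight);
            device.SetRenderTarget(target);
            device.Clear(Color.Transparent);
            batch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                        SamplerState.LinearClamp, null, null);
            batch.Draw(source, new Rectangle(0, 0, targetWidth, targetHeight), Color.White);
            batch.End();
            device.SetRenderTarget(null);
            return target; // RenderTarget2D derives from Texture2D, so it draws like any texture
        }

    The GPU's linear filtering does the resampling, so no CPU-side resize algorithm is needed; the trade-offs are a one-time cost at load instead of a bigger download, and render-target contents being lost on device reset unless RenderTargetUsage.PreserveContents is used.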


  • REST and redirecting the response

    - by Duane Gran
    I'm developing a RESTful service. Here is a map of the current feature set:

        POST   /api/document/file.jpg   (creates the resource)
        GET    /api/document/file.jpg   (retrieves the resource)
        DELETE /api/document/file.jpg   (removes the resource)

    So far, it does everything you might expect. I have a particular use case where I need to set up the browser to send a POST request using the multipart/form-data encoding for the document upload, but when it is completed I want to redirect them back to the form. I know how to do a redirect, but I'm not certain about how the client and server should negotiate this behavior. Two approaches I'm considering:

    1. On the server, check for the multipart/form-data encoding and, if present, redirect to the referrer when the request is complete.
    2. Add a service URI of /api/document/file.jpg/redirect to redirect to the referrer when the request is complete.

    I looked into setting an X header (X-myapp-redirect) but you can't tell the browser which headers to use like this. I manage the code for both the client and the server side, so I'm flexible on solutions here. Is there a best practice to follow here?
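
    For reference (not from the post), the usual HTTP-level way to express "create, then send the browser back" is a 303 See Other, which tells the client to follow Location with a GET. A hedged sketch of the exchange, with illustrative URLs:

        POST /api/document/file.jpg HTTP/1.1
        Host: api.example.com
        Content-Type: multipart/form-data; boundary=----x
        ...

        HTTP/1.1 303 See Other
        Location: https://www.example.com/upload-form

    That keeps the redirect negotiation in standard status-code semantics instead of custom headers, and works with either of the approaches listed above.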


  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post", I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea?

    EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications

    EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:

    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about other than admin functions is someone only being allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records, which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app
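
    For the "only the author can edit their post" rule specifically, CouchDB's enforcement point is a validate_doc_update function in a design document. A minimal sketch (the author field name is an assumption):

        // stored as the validate_doc_update member of a design document, e.g. _design/posts
        function (newDoc, oldDoc, userCtx) {
          // assumption: every post document carries an "author" field
          if (oldDoc && oldDoc.author !== userCtx.name) {
            throw({ forbidden: 'Only the author can edit their post' });
          }
          if (!oldDoc && newDoc.author !== userCtx.name) {
            throw({ forbidden: 'author must match the logged-in user' });
          }
        }

    Worth noting what this does and doesn't cover: it runs on every write, but it cannot make reads selective - hiding password hashes in user records still depends on CouchDB's database-level security, which is exactly the boundary the question is probing.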


  • XAMPP - Unable to serve files larger than ~30MB [on hold]

    - by Sparx401
    I'm developing a site locally with XAMPP on Windows 7, and as far as media is concerned, I'm unable to play media files that are larger than 30MB or so. Both video and audio files (MP4 and MP3 respectively) generate this error in Chrome (and show similar errors in other browsers such as IE9 and Opera):

        No data received
        Unable to load the webpage because the server sent no data.
        Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    It seems that the exact number of MB somewhat varies between browsers though. One video in question is 34MB and actually plays in Opera and IE9, but gives the aforementioned error in Chrome. I've checked to make sure the file paths were typed correctly and ensured that the directive is there in .htaccess to serve MP4s:

        AddType video/mp4 mp4

    Also, I have these directives set as well in the same .htaccess file:

        php_value upload_max_filesize "80M"
        php_value post_max_size "80M"
        php_value max_input_time 60
        php_value max_execution_time 60

    And memory_limit is set in php.ini as "128M", so I'm left wondering: what is causing my files to not play, and what, if any, directives do I have to change on the server side? Perhaps something to do with limitations of the GET method (the method I'm seeing on Chrome's network tab among other header request/response info)?
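
    One way to narrow this down (not from the post, just a hedged diagnostic): request a failing file outside the browser and watch where the response stops. A consistent byte cutoff points at a server or PHP limit, while full headers with an empty body points at the handler serving the file:

        curl -v -o NUL http://localhost/media/video.mp4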


  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (modification on Day of Ubuntu) that changes based on time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after running the background change, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?
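
    A hedged guess at the cause: gsettings writes through dconf over the session D-Bus, and cron jobs don't inherit DBUS_SESSION_BUS_ADDRESS, so the call quietly goes nowhere. A sketch of the usual workaround (process name and paths are assumptions for a stock 12.04 GNOME session):

        #!/bin/bash
        # borrow the D-Bus session address from the running desktop session
        PID=$(pgrep -u "$USER" gnome-session | head -n 1)
        export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS "/proc/$PID/environ" | tr -d '\0' | cut -d= -f2-)

        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml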


  • Optimizing MySQL -

    - by Josh
    I've been researching how to optimize MySQL a bit, but I still have a few questions. MySQL Primer results:

        http://pastie.org/private/lzjukl8wacxfjbjhge6vw

    Based on this, the first problem seems to be that the max_connections limit is too low. I had a similar problem with Apache initially: the max connection limit was set to 100, and the web server would frequently lock up and take an excruciatingly long time to deliver pages. Raising the connection limit to 512 fixed this issue, and I read that raising the connection limit on MySQL to match this was considered good practice. Being that MySQL has actually been "locking up" recently as well (connections have been refused entirely for a few minutes at a time at random intervals), I'm assuming this is the main cause of the issue. However, as far as the table cache goes, I'm not sure what I should set it to. I've read that setting this too high can hinder performance further, so should I raise it to right around 551, 560, 600, or do something else? Lastly, as far as raising the join_buffer_size value goes, this doesn't even seem to be included in Debian's my.cnf file by default. Assuming there's not much I can do about adding indexes, should I look into raising this? Any suggested values? Any suggestions in general here would be appreciated as well.

    Edit: Here's the number of open tables the MySQL server is reporting. I believe this value is related to my question: Opened_tables: 22574
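
    A hedged starting point for my.cnf based only on the numbers in the question - values to iterate on while watching the counters, not a recommendation:

        [mysqld]
        max_connections  = 512
        # table_cache was renamed table_open_cache in MySQL 5.5+
        table_cache      = 600
        # join_buffer_size applies per join that cannot use an index, so keep it modest
        join_buffer_size = 1M

    A fast-growing Opened_tables counter (22574 here) is the usual sign the table cache is too small; raise it in steps and check whether the counter keeps climbing after a day of traffic.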


  • Hiding the Flash Message After a Time Delay

    - by Madhan ayyasamy
    Hi Friends, the flash hash is a great way to provide feedback to your users. Here is a quick tip for hiding the flash message after a period of time if you don't want to leave it lingering around. First, add this line to the head of your layout to ensure the prototype and script.aculo.us javascript libraries are loaded: Next, add the following to either your layout (recommended), your view templates or a partial, depending on your needs. I usually add this to a partial and include the partial in my layouts. "flash", :id = flash_type % "text/javascript" do % setTimeout("new Effect.Fade('');", 10000); This will wrap the flash message in a div with class 'flash' and id 'error', 'notice' or 'warn' depending on the flash key specified. The value '10000' is the time in milliseconds before the flash will disappear - in this case, 10 seconds. This function looks pretty good, and little javascript stunts like this can help make your site feel more professional. It's also worth bearing in mind, though, that not everybody can see well or read as quickly as others, so this may not be suitable for every application.

    Update: As Mitchell has pointed out (see comments below), it may be better to set the flash_type as the div class rather than its id. If there is the possibility that you'll be showing more than one flash message per page, setting the flash_type as the div id will result in your HTML/XHTML code becoming invalid, because the unique identifier will be used more than once per page. Here is a slightly more complex version of the method shown above that will hide all divs with class 'flash' after a time delay, achieving the same effect and also ensuring your code stays valid with more than one flash message! "flash #{flash_type}" % "text/javascript" do % setTimeout("$$('div.flash').each(function(flash){ flash.hide();})", 10000); In this example, the div id is not set at all. Instead, each flash div will have class "flash" and also a class for the type of flash message ("error", "warning" etc.). Have a Great Day..:)
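
    The ERB tags in the two snippets above were stripped somewhere along the way. A plausible reconstruction of the second, multi-message variant - assuming a standard flash loop, since the exact helper calls are unrecoverable - looks like this:

        <% flash.each do |flash_type, message| %>
          <div class="flash <%= flash_type %>"><%= message %></div>
        <% end %>
        <script type="text/javascript">
          setTimeout("$$('div.flash').each(function(flash){ flash.hide(); })", 10000);
        </script>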


  • Rotation of bitmap using a frame by frame animation

    - by pengume
    Hey everyone, I know this has probably been asked a ton of times, but I just wanted to clarify if I am approaching this correctly, since I ran into some problems rotating a bitmap. So basically I have one large bitmap that has four frames drawn on it, and I only draw one at a time by looping through the bitmap by increments to animate walking. I can get the bitmap to rotate correctly when it is not moving, but once the animation starts it starts to cut off a lot of the image and sometimes becomes very fuzzy. I have tried:

        public void draw(Canvas canvas, int pointerX, int pointerY) {
            Matrix m;
            if (setRotation) {
                // canvas.save();
                m = new Matrix();
                m.reset();
                // spriteWidth and spriteHeight are for just the current frame shown
                m.setTranslate(spriteWidth / 2, spriteHeight / 2);
                // get and set rotation for ninja based off of joystick
                m.preRotate((float) GameControls.getRotation());
                // create the rotated bitmap
                flipedSprite = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), m, true);
                // set new bitmap to rotated ninja
                setBitmap(flipedSprite);
                // canvas.restore();
                Log.d("Ninja View", "angle of rotation= " + (float) GameControls.getRotation());
                setRotation = false;
            }
        }

    And then the draw method here:

        // create the destination rectangle for the ninja's current animation frame;
        // pointerX and pointerY are from the joystick moving the ninja around
        destRect = new Rect(pointerX, pointerY, pointerX + spriteWidth, pointerY + spriteHeight);
        canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);

    The animation is four frames long and gets incremented by 66 (the size of one of the frames on the bitmap) for every frame, and then back to 0 at the end of the loop.
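
    A hedged alternative to rebuilding the bitmap every frame: rotate the canvas at draw time and leave the sprite sheet untouched (a sketch reusing the post's names):

        canvas.save();
        canvas.rotate((float) GameControls.getRotation(),
                      pointerX + spriteWidth / 2f, pointerY + spriteHeight / 2f);
        canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);
        canvas.restore();

    This also explains the symptoms: Bitmap.createBitmap with a rotation matrix returns a bitmap whose bounds grow to fit the rotated image, so fixed-size frame offsets start sampling the wrong region - hence the cut-off, fuzzy frames.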


  • Defaults for Exporting Data in Oracle SQL Developer

    - by thatjeffsmith
    I was testing a reported bug in SQL Developer today – so the bug I was looking for wasn’t there (YES!) but I found a different one (NO!) – and I was getting frustrated by having to check the same boxes over and over again. What I wanted was INSERT STATEMENTS to the CLIPBOARD. Not what I want! I’m always doing the same thing, over and over again. And I never go to FILE – that’s too permanent for my type of work. I either want stuff to the clipboard or to the worksheet. Surely there’s a way to tell SQL Developer how to behave? Oh yeah, check the preferences. You can set the defaults for this dialog. Go to: Tools – Preferences – Database – Utilities – Export. Now I will always start with ‘INSERT’ and ‘Clipboard’ – woohoo! Now, I can also go INTO the preferences for each of the different formats to save me a few more clicks. I prefer pointy hats (^) for my delimiters, don’t you? So, spend a few minutes and set each of these to what you’re normally doing, and save yourself a bunch of time going forward.


  • Mixing XNA and Silverlight gives weird graphics

    - by Mech0z
    I'm making a small 3D game which is made as a Silverlight and XNA app, but when I draw the sprites the graphics become all weird. All my primitive types are rendered correctly, but my 3D models are just weird. My Draw is like this when Silverlight is set to draw:

        private void OnDraw(object sender, GameTimerEventArgs e)
        {
            // Render the Silverlight controls using the UIElementRenderer
            elementRenderer.Render();

            // Clear the screen to a solid color
            SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

            switch (gameState)
            {
                case GameState.ChooseStarter:
                    TextBlockStatus.Text = "Find Starting Player";
                    break;
                case GameState.PlaceBrick:
                    TextBlockPlayer.Text = (playerTurn == PlayerTurn.PlayerOne) ? "Player One" : "Player Two";
                    TextBlockState.Text = "Place Brick";
                    foreach (IGraphicObject obj in _3dObjects)
                    {
                        obj.Draw(cameraPosition, e);
                    }
                    break;
                case GameState.GiveBrick:
                    TextBlockState.Text = "Give Brick";
                    break;
            }

            spriteBatch.Begin();
            // Using the texture from the UIElementRenderer,
            // draw the Silverlight controls to the screen
            spriteBatch.Draw(elementRenderer.Texture, cameraProjection, Color.White);
            spriteBatch.End();
        }

    This gives me this output. If I comment the spritebatch lines out I get the correct output, except the Silverlight text is of course not shown. I am not entirely sure what to look for except that zero vector I am giving to the spritebatch, but if that's the source I have no idea what I am supposed to set it as, especially when it's a 2D vector.
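
    A hedged guess at the cause: in XNA 4.0, SpriteBatch.Begin/End quietly changes several GraphicsDevice states (depth buffering off, clamped sampling), which mangles 3D models drawn on later frames. The usual fix is to reset the states before each 3D pass:

        var device = SharedGraphicsDeviceManager.Current.GraphicsDevice;
        device.BlendState = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;
        device.SamplerStates[0] = SamplerState.LinearWrap;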


  • BPM 11gR1 now available on Amazon EC2

    - by Prasen Palvankar
    The new Oracle BPM 11gR1, including the latest Oracle SOA Suite 11gR1 Patchset-2, is now available as an Amazon Machine Image (AMI). This is a fully configured image which requires absolutely no installation and lets you get hands-on experience with the software within minutes. This image has all the required software installed and configured and includes the following:

    - Oracle 11g Database Standard Edition
    - Oracle SOA Suite 11gR1 Patch-set 2
    - Oracle BPM 11gR1
    - Oracle Webcenter with BPM Process Spaces
    - Oracle Universal Content Management
    - Oracle JDeveloper with SOA and BPM plugins

    Note: Use of this AMI requires acceptance of Oracle Technology Network (OTN) terms of use. To use this AMI, follow these steps:

    1. Login to your Amazon account and browse to the Amazon AWS Console. If this is the first time you are using Amazon Web Services, please visit https://aws.amazon.com/ec2/ for information on Amazon Elastic Cloud Computing and how to get started with Amazon EC2.
    2. Make sure the security group that you will be using to launch the instance allows the following ports to be opened: 22 (SSH), 1521, 7001, 8001, 8888, 9001.
    3. Click on AMIs.
    4. Change the viewing filters to 64-bit and enter soa-bpm in the search box. You should see the following AMI: 083342568607/oracle-soa-bpm-11gr1-ps2-4.1-pub
    5. Select the AMI and click on Launch or Spot Request. For more information on spot requests, please visit the Amazon EC2 link above.
    6. Accept all the defaults and launch the instance.
    7. When the instance state changes to running, copy the assigned public host name and connect to it using either PuTTY or an SSH command. For PuTTY usage, refer to this document.
    8. Once you are connected to the instance using PuTTY or SSH, you will be presented with the terms of use. Accept the terms of use to proceed. This will prompt you to set passwords for your oracle OS login as well as for VNC. Note that the instance will not be usable until you have accepted the terms of use.
    9. The instance is now ready to use. The SOA/BPM and other servers are automatically started once you accept the terms of use. Initial startups can take about 5-10 minutes.

    If you would like to use the JDeveloper installed in the AMI, you can access it either using VNC or NX. You can get the NX client from NoMachine. /home/oracle/README.txt contains all the URLs that you can use to access the Enterprise Manager, BPM Composer, BPM Workspace, Webcenter etc.


  • Google Analytics checkout page tracking problem

    - by Amir E. Habib
    I am running a multilingual website, each language on a different domain name. I am trying to lead all purchase requests to the checkout process, which has its own domain too. In order to keep Google Analytics tracking I've updated the Google Analytics code accordingly: I set the source domain to 'multiple top-level domains'. Everything is going fine so far, except that in E-commerce Overview the "Sources / Medium" is always showing as (direct) - or the name of the source domain. Since I am redirecting using PHP header(location: ... etc.), the Google _link method doesn't seem to be working properly. I want to focus on two questions:

    1. Should I create a new profile for the checkout domain in Google Analytics? (I am now using the profile ID of the source domain even though I move to the checkout domain - is that OK?)
    2. When I'm trying to pass the cookies of the source domain to the checkout domain, I notice that the Google cookies are copied to the new domain (the cookie path is .checkout-domain/) and they have the same values as the original cookies - but for some reason another set of cookies is created once I access a page with Google Analytics code in the checkout pages, with different values (same path).

    Feels like I'm doing something wrong here, so my question is: what am I doing wrong here? Does anyone have an idea how to pass the cookies to the checkout domain?
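
    For the header(location: ...) redirect case, the classic ga.js answer (a hedged sketch assuming the async snippet; the property ID placeholder is illustrative) is to configure cross-domain linking so the _utm cookie values travel in the URL rather than being copied server-side:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-Y']);   // same property ID on both domains
        _gaq.push(['_setDomainName', 'none']);      // the 'multiple top-level domains' setting
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);

    A second, differently-valued set of Google cookies appearing on the checkout domain is the usual symptom of the linker not running: each domain starts its own visitor, which is also why the source shows up as (direct). Under this setup one profile is used across both domains, so a separate profile for the checkout domain is not required for attribution.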


  • Unattended grub configuration after kernel upgrade

    - by bouke
    Today I have been working on automatic deployment of an Ubuntu server. I got stuck on automatic updating of the server using apt-get upgrade when it tries to upgrade to a new kernel. The log looks like this:

        Setting up linux-image-3.2.0-24-generic (3.2.0-24.39) ...
        Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
        (...)

    Then a question is presented:

        Package configuration

        A new version of /boot/grub/menu.lst is available, but the version
        installed currently has been locally modified.

        What would you like to do about menu.lst?

            install the package maintainer's version
            keep the local version currently installed
            show the differences between the versions
            show a side-by-side difference between the versions
            show a 3-way difference between available versions
            do a 3-way merge between available versions (experimental)
            start a new shell to examine the situation

            <Ok>

    The desired outcome would be to select the first option and to continue:

        Replacing config file /run/grub/menu.lst with new version
        Updating /boot/grub/menu.lst ... done

    After running the upgrade by hand, I used debconf-get-selections to inspect the correct answer for the question (see other settings). It seems like update_grub_changeprompt_threeway is the question that should be answered. However, setting this using debconf-set-selections presented me with the same question:

        debconf-set-selections <<< "grub grub/update_grub_changeprompt_threeway select install_new"
        apt-get -y dist-upgrade

    How can this question be automated?
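
    A hedged alternative worth trying (not from the post): bypass the prompt at the dpkg layer, which is the documented switch for modified conffiles, and silence debconf at the same time. Whether the menu.lst prompt honors these on every upgrade path is worth verifying on a scratch instance:

        # keep the locally modified menu.lst (use --force-confnew to take the maintainer's version instead)
        DEBIAN_FRONTEND=noninteractive apt-get -y \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            dist-upgrade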


  • How to reset display settings in XFCE \ Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04 and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation, and Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE "settings" manager. After that, everything got screwed. Now, I normally run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc., just to display other windows when I want to drag them over there. The problem is now that if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024 BUT then overlays the CRT's display on top, with different wallpaper, in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that has monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I re-installed them, I can't run amdcccle anymore as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle but it looks like I may have to :(
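
    For the XFCE half, a hedged first step (the path below is the usual location on 12.04-era XFCE, worth verifying): move the saved display layout aside so the settings daemon regenerates defaults on the next login.

        # back up and remove XFCE's stored monitor layout (assumed path)
        mv ~/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml \
           ~/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml.bak
        # then log out of XFCE and back in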


  • Remove IP address from the URL of a website using Apache

    - by sapatos
    I'm on an EC2 instance and have a domain domain.com linked to the EC2 nameservers, and it happily serves my pages if I type domain.com in the URL. However, when a page is served it resolves the URL to 1.1.1.10/directory/page.php. Using Apache I've set up the following VirtualHost, following examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html

        Listen 80
        NameVirtualHost 1.1.1.10:80
        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times whilst experimenting with the configuration. The route53 entries I have are:

        domain.com  A    1.1.1.10
        domain.com  NS   ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com  SOA  ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100
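
    One hedged thing to check (not from the post): Apache only controls self-referential URLs it builds itself, and it builds them from ServerName when UseCanonicalName is on; redirects generated by the application (PHP reading SERVER_ADDR, for example) have to be fixed in the application instead.

        # inside the VirtualHost: build self-referential URLs from ServerName,
        # not from the address the request arrived on
        ServerName domain.com
        UseCanonicalName On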


  • Syntactic sugar in PHP with static functions

    - by Anna
    The dilemma I'm facing is: should I use static classes for the components of an application just to get a nicer-looking API? Example - the "normal" way:

        // example component
        abstract class Cache {
            abstract function get($k);
            abstract function set($k, $v);
        }

        class APCCache extends Cache { ... }

        class application {
            function __construct() {
                $this->cache = new APCCache();
            }

            function whatever() {
                $this->cache->set('blabla', 5);
                print $this->cache->get('blabla');
            }
        }

    Notice how ugly this->cache->... is. But it gets waay uglier when you try to make the application extensible through plugins, because then you have to pass the application instance to its plugins, and you get $this->application->cache->... With static functions:

        interface CacheAdapter {
            function get($k);
            function set($k, $v);
        }

        class Cache {
            public static $ad;

            public static function setAdapter(CacheAdapter $ad) {
                static::$ad = $ad;
            }

            public static function get($k) {
                return static::$ad->get($k);
            }
            ...
        }

        class APCCache implements CacheAdapter { ... }

        class application {
            function __construct() {
                Cache::setAdapter(new APCCache);
            }

            function whatever() {
                Cache::set('blabla', 5);
                print Cache::get('blabla');
            }
        }

    Here it looks nicer because you just call Cache::get() everywhere. The disadvantage is that I lose the possibility to extend this class easily. But I've added a setAdapter method to make the class extensible to some point. I'm relying on the fact that I won't ever need a rewrite to replace the cache wrapper, and that I won't need to run multiple application instances simultaneously (it's basically a site - and nobody works with two sites at the same time). So, am I doing it wrong?


  • Copying files to zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and TV shows currently. I thought zfs sounded good, so after installing the ppa I created a raidz zpool:

        sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd

    and set the mountpoint to /mnt/store:

        sudo zfs set mountpoint=/mnt/store /store

    checked it was successful - I think it was:

        sudo zfs list
        NAME    USED  AVAIL  REFER  MOUNTPOINT
        store   266K  7.16T   170K  /mnt/store

    Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered:

        sudo cp -R * /mnt/store
        cp: cannot create directory `/mnt/store/media': No space left on device

    It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0 which is based on 12.04. Will retry with normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
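
    A hedged diagnosis: "No space left on device" while zfs list shows 7.16T free usually means the copy landed on the small OS disk, i.e. /mnt/store was just an ordinary directory because the dataset wasn't actually mounted there. A quick check (note the dataset is named store, with no leading slash):

        zfs get mounted,mountpoint store   # "mounted" should say yes
        df -h /mnt/store                   # should show the 7.16T pool, not the 250GB OS disk
        # if it is not mounted:
        sudo zfs set mountpoint=/mnt/store store
        sudo zfs mount store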


  • Nginx or Apache for a VPS?

    - by James
    I consider myself to be an inexperienced user/administrator when it comes to running my VPS. I can get by with a few CLI commands, I can set up Webmin and I can set up Yum repos, but beyond the very basic stuff, I'm out of my depth. So far, I'm running Apache. I don't know it particularly well, but I can get by with editing httpd.conf if I'm told what to edit. I've heard good things about Nginx, and that it's not as resource-hungry as Apache. I'd like to give it a go, but I can't find any information about its suitability for administrators like me, with little experience of sysadmin or web server config. Webmin now has support for Nginx, so getting it installed and running probably won't be too much of a problem. What I'm wondering is, from a site administrator perspective, is running Nginx as transparent as running Apache? I.e., at the moment, I can just throw up Wordpress and Drupal sites without having much to worry about or having to make any config changes to Apache. Would Nginx be as transparent?
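
    To make the "transparency" question concrete: with Apache plus mod_php, WordPress or Drupal mostly just works, while nginx has no embedded PHP and always needs an explicit hand-off to PHP-FPM. A minimal hedged sketch of what that looks like (socket path varies by distro; illustrative, not a drop-in config):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;
            index index.php;

            # pretty URLs for Wordpress/Drupal-style front controllers
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            # hand PHP to php-fpm; with Apache's mod_php there is no equivalent step
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }

    So it is not quite as transparent: per-directory .htaccess files also don't exist in nginx, so anything a CMS ships in .htaccess has to be translated into the server block once.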


  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x is set up in the following way:

    - The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)

    then we've got news for you: KM Article "OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control" (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard. This diagnostic approach...

    - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system including the repository.
    - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and Oracle DB.
    - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.

    (layout and post: Torben, authorized: Lia)


  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, and additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, with business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide that will be a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a dotNet web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts:

    1. I have concerns with moving from the relative safety of an SSIS job, which protects me from downtime and the speed of the ERP system, to directly connecting with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?
    2. It is my understanding that data access objects (the component that connects directly with the web services and converts their responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade) - am I correct?
    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request/access uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".


  • Apache HTTPS ProxyPass certificate location

    - by oz1cz
    I'm trying to set up an Apache server that uses ProxyPass to pass HTTPS requests on to another server. Let's call the proxy server ALPHA and the target server BETA. ALPHA does not run HTTPS, but BETA does. I first tried using this virtual host specification on ALPHA:

        <VirtualHost *:443>
            ServerName mysite.com
            ProxyPass        / https://192.168.1.105/   # BETA's IP address
            ProxyPassReverse / https://192.168.1.105/   # BETA's IP address
            ProxyPreserveHost On
            ProxyTimeout 600
            SSLProxyEngine On
            RequestHeader set Front-End-Https "On"
            CacheDisable *
        </VirtualHost>

    But when I tried this, Apache complained saying, "[error] Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile]". I had to copy the SSL certificate from BETA to ALPHA and add these lines to the host specification on ALPHA:

        SSLEngine on
        SSLCertificateKeyFile /usr/local/ssl/private/BETA_private.key
        SSLCertificateFile /usr/local/ssl/crt/BETA_public.crt
        SSLCertificateChainFile /usr/local/ssl/crt/BETA_intermediate.crt

    Now the system works. But I have a feeling that I have done something wrong or unnecessary. I have the web site's private key and certificate lying on both ALPHA and BETA. Is that necessary? Should I have done it differently?
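
    A hedged read on why Apache complained: ALPHA listens on port 443, so clients terminate TLS at ALPHA, and it must present some certificate valid for mysite.com - but it does not have to be BETA's files if another valid pair is available. A sketch with illustrative paths:

        <VirtualHost *:443>
            ServerName mysite.com
            SSLEngine On
            # any cert/key pair valid for mysite.com works here; it need not come from BETA
            SSLCertificateFile    /etc/ssl/certs/mysite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mysite.com.key
            SSLProxyEngine On
            ProxyPass        / https://192.168.1.105/
            ProxyPassReverse / https://192.168.1.105/
        </VirtualHost>

    The ALPHA-to-BETA leg is a second, separate TLS connection driven by SSLProxyEngine, which is why BETA keeps its own certificate either way.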


  • How do I boot into console mode (redux)

    - by Leo Simon
    I'm running Ubuntu 12.04. This question was asked some time ago ("How do I disable the boot splash screen?") but the answers didn't work for me. The standard way to boot into console mode used to be to edit /etc/default/grub and set:

        GRUB_CMDLINE_LINUX_DEFAULT="text"

    This worked fine until I ran the fix proposed in https://help.ubuntu.com/community/SoundTroubleshootingProcedure in order to get sound to work. Since then, I have disabled the boot splash screen, but I can't avoid what I presume is the lightdm login prompt screen. All I want to do is disable this GUI and be prompted with a console login prompt. (Shouldn't be so hard, should it???) I read in thread 33416 mentioned above that there was a bug in lightdm (it wasn't recognizing "text" properly as an option for GRUB_CMDLINE_LINUX_DEFAULT). But this discussion happened more than a year ago, and it's surely been fixed. Yet my lightdm is up to date (so I'm told when I try to update it with apt-get). As suggested in one of the above, I tried:

        sudo update-rc.d -f lightdm remove

    which resulted in a hung machine. I managed to recover using recovery mode, but now I still get the GUI again. Another suggestion is to edit /etc/init/lightdm.override. I've done this and set it to "manual" as suggested, but lightdm simply ignores this. Could somebody suggest how to proceed please? Thanks very much, Leo
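
    One hedged detail that is easy to miss: edits to /etc/default/grub do nothing until update-grub regenerates the boot configuration. The usual 12.04 combination, worth re-running as a set even if each piece was tried separately:

        # in /etc/default/grub:
        GRUB_CMDLINE_LINUX_DEFAULT="text"

        sudo update-grub
        # keep lightdm from starting (upstart override):
        echo manual | sudo tee /etc/init/lightdm.override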


  • ZenGallery: a minimalist image gallery for Orchard

    - by Bertrand Le Roy
    There are quite a few image gallery modules for Orchard, but they were not invented here. I wanted something a lot less sophisticated that would be as barebones and minimalist as possible out of the box, to make customization extremely easy. So I made this, in less than two days (during which I got distracted a lot). Nwazet.ZenGallery uses existing Orchard features as much as it can:

    - Galleries are just a content part that can be added to any type.
    - The set of photos in a gallery is simply defined by a folder in Media.
    - Managing the images in a gallery is done using the standard media management from Orchard.
    - Ordering of photos is simply alphabetical order of the filenames (use 1_, 2_, etc. prefixes if you have to).
    - The path to the gallery folder is mapped from the content item using a token-based pattern.
    - The pattern can be set per content type.
    - You can edit the generated gallery path for each item.
    - The default template is just a list of links over images, that get opened in a new tab.
    - No lightbox script comes with the module; just customize the template to use your favorite script.

    Light, light, light. Rather than explaining this very simple module in more detail, here is a video that shows how I used the module to add photo galleries to a product catalog: Adding a gallery to a product catalog

    You can find the module on the Orchard Gallery: https://gallery.orchardproject.net/List/Modules/Orchard.Module.Nwazet.ZenGallery/

    The source code is available from BitBucket: https://bitbucket.org/bleroy/nwazet.zengallery


  • ASP.NET ViewState Tips and Tricks #1

    - by João Angelo
    In User Controls or Custom Controls DO NOT use ViewState to store non-public properties. Persisting non-public properties in ViewState results in loss of functionality if the Page hosting the controls has ViewState disabled, since it can no longer reset values of non-public properties on page load. Example:

        public class ExampleControl : WebControl
        {
            private const string PublicViewStateKey = "Example_Public";
            private const string NonPublicViewStateKey = "Example_NonPublic";

            // DO
            public int Public
            {
                get
                {
                    object o = this.ViewState[PublicViewStateKey];
                    if (o == null)
                        return default(int);
                    return (int)o;
                }
                set { this.ViewState[PublicViewStateKey] = value; }
            }

            // DO NOT
            private int NonPublic
            {
                get
                {
                    object o = this.ViewState[NonPublicViewStateKey];
                    if (o == null)
                        return default(int);
                    return (int)o;
                }
                set { this.ViewState[NonPublicViewStateKey] = value; }
            }
        }

        // Page with ViewState disabled
        public partial class ExamplePage : Page
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                this.Example.Public = 10;    // Restore Public value
                this.Example.NonPublic = 20; // Compile Error!
            }
        }


  • Texture quality issues with libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my gameboard directly in Illustrator just before setting my assets up in libgdx to implement them in my game. I set all the objects at the right size, so that they fit perfectly on the XHDPI device I am running my test on. As you can see it works great (for me at least ^^); the PNG quality is good for me, as expected! So I have edited all my PNGs at this size, set my assets up in libgdx and built my game APK. And here is a screenshot of my gameboard (don't pay attention to the differences in placing and objects, but look at the objects present on both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (not as obvious) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible on my first screenshot. And I just can't figure out why this is happening Oo If you have any ideas, you are very welcome! Thanks.

    PS: You can check the two gameboards more clearly via these two links (look at them at 100%, displayed at high resolution): Good quality link, from Illustrator; Poor quality link, from the game

    Second phase of tests: We display an object (the hedgehog) on our main menu screen to see how it looks. The thing is that it looks like it is supposed to, which means high quality with no pixels. The hedgehog PNG is coming from an atlas:

        layer.addActor(hedgehog);

    No loss of quality with this method. So we think the problem is coming from the method we are using to display it on our gameboard:

        blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block is getting its size from the vector we are associating with it, but we have a loss of quality with this method.
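
    A hedged guess (a common libgdx culprit rather than anything stated in the post): textures default to Nearest filtering, so any scaling shows stair-stepped edges; switching to Linear filtering at load often fixes exactly this kind of outline artifact.

        // when loading the texture or atlas page
        texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
        // for TexturePacker atlases, set filterMin/filterMag to Linear in the pack
        // settings so every generated page is loaded with linear filtering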

