Search Results

Search found 19949 results on 798 pages for 'print css'.

  • nginx+django serving static files

    - by avalore
    I have followed the instructions for setting up Django with nginx from the Django wiki (https://code.djangoproject.com/wiki/DjangoAndNginx) and have nginx set up as follows (a few name changes to fit my setup):

        user nginx nginx;
        worker_processes 2;
        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            '"$gzip_ratio"';
            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;
            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;
            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;
            output_buffers 1 32k;
            postpone_output 1460;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 75 20;
            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /static/ {
                    root /srv/static/;
                }

                location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mov) {
                    access_log off;
                    expires 30d;
                }

                location / {
                    # host and port to fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    fastcgi_param REMOTE_ADDR $remote_addr;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Static files aren't being served (nginx returns 404). The access log shows nginx looking in /etc/nginx/html/static... rather than /srv/static/ as specified in the config. I've no clue why it's doing this; any help would be hugely appreciated.
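    A likely cause, going by the config above: the extension regex location (which matches .css, .js, .png and friends) wins over the plain "location /static/" prefix for any static asset, and since it declares no root, nginx falls back to its compiled-in default (/etc/nginx/html) — exactly the path showing up in the access log. A minimal sketch of one fix, assuming the files live directly in /srv/static/ (directive names are standard nginx; the paths are the poster's):

        # "^~" makes this prefix location win over regex locations, so the
        # extension-based block below no longer captures /static/... requests
        location ^~ /static/ {
            # alias maps /static/foo.css to /srv/static/foo.css;
            # "root /srv/static/" would look for /srv/static/static/foo.css instead
            alias /srv/static/;
            access_log off;
            expires 30d;
        }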

  • Help needed setting up nginx to serve static files.

    - by Catalina
    Hi guys, I'm trying to set up nginx to serve static files. Basically, all I need is to have http://mydomain.com/site_media/ point to /var/django/myproject/site_media. I have tried so many configurations, and when I test them I always get a 404 error for static files. Can anyone please tell me what I'm doing wrong or how I should be setting this up? This is my current nginx configuration file:

        user www-data;
        worker_processes 1;

        #error_log /usr/local/nginx/logs/error.log;
        #pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            # Enumerate all the Tornado servers here
            upstream frontends {
                server 127.0.0.1:8000;
                server 127.0.0.1:8001;
                server 127.0.0.1:8002;
                server 127.0.0.1:8003;
            }

            include mime.types;
            default_type application/octet-stream;
            #access_log /usr/local/nginx/logs/access.log;

            keepalive_timeout 65;
            proxy_read_timeout 200;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain text/html text/css text/xml application/x-javascript application/xml application/atom+xml text/javascript;
            proxy_next_upstream error;

            server {
                listen 80;

                # Allow file uploads
                client_max_body_size 50M;

                location ^~ /site_media/ {
                    root /var/django/myproject/site_media;
                    if ($query_string) {
                        expires max;
                    }
                }

                location = /favicon.ico {
                    rewrite (.*) /site_media/favicon.ico;
                }

                location = /robots.txt {
                    rewrite (.*) /site_media/robots.txt;
                }

                location / {
                    proxy_pass_header Server;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Scheme $scheme;
                    proxy_pass http://frontends;
                }
            }

            #include /usr/local/nginx/sites-enabled/*;
        }

    Thanks, Cata
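    One thing worth checking in the config above: with a prefix location, nginx appends the whole request URI to "root", so /site_media/logo.png would be looked up at /var/django/myproject/site_media/site_media/logo.png — one site_media too many, which matches the 404s described. A hedged sketch of the two usual fixes (same paths as in the question):

        location ^~ /site_media/ {
            # Option 1: point root at the parent directory and let nginx
            # append /site_media/... to it itself
            root /var/django/myproject;

            # Option 2: use alias, which substitutes the matched prefix
            # alias /var/django/myproject/site_media/;
        }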

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like serverfault does). I've looked at TWiki, which many people suggested, but haven't found what I'm looking for. Basically, I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. All the knowledge base systems I have seen so far are a collection of articles, each with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles matching that search text. This is fine and dandy when your KB covers one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then goes ahead and creates a new version (ver 2.0) that adds many new features and changes some existing ones. Now, how do we support both products in the same KB? The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar).

    The problem with articles being indexed by title is that it's very hard to filter based on metadata like version. What happens when we create version 3.0 or 4.0? The end situation is a mess of articles: hard to search, hard to filter, and even harder to manage.

    The solution, it seems to me, is to use tags rather than text as the article index mechanism. Articles can be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tags — an example search might be "version_1 printing", which straight away gives a list of articles carrying all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with Drupal, but I was hoping for something that works out of the box.

  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company that has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far out of my comfort zone; I'm definitely not a network admin.

    Their IT consultancy contractor is pushing them to upgrade their dedicated Exchange server, even though the one they currently have seems to have a lot of life left in it and is running problem-free. They say it's "coming to the natural end of its life". They want to install a monster with a Xeon E5-2420, 32 GB RAM, 2x 1 TB HDDs, Windows Server 2012 and Microsoft Exchange 2010, and they want to charge a small fortune for it. Basically, this system seems massively over the top, seeing as it won't be doing anything other than running as an Exchange server for a company with fewer than 25 email accounts.

    My employers also have an in-house file server that hosts three web apps, an SQL server, their local domain, a print server and shared folders. That machine has the same specs as the proposed new one, and it is barely using any of its potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that Exchange can't run on the same system as an SQL server because for some reason the two will eat up each other's resources (even though the SQL server isn't touching 1% of the current system's CPU or RAM).

    My question is really: are they trying to rip my employers off? Could Exchange be installed on the other server (in a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary. I apologize, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no one will want to read. I've been doing my research, but I'm not knowledgeable enough to make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer.

    Further details: the new Exchange would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with Blackberries.

  • nginx rewrite rule to convert URL segments to query string parameters

    - by Nick
    I'm setting up an nginx server for the first time and having some trouble getting the rewrite rules right. The Apache rules we used were:

        # See if it's a real file or directory; if so, serve it,
        # then send all requests for / to Director.php
        DirectoryIndex Director.php

        # If the URL has one segment, pass it as rt
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]

        # If the URL has two segments, pass it as rt and action
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    My nginx config file looks like:

        server {
            ...
            location / {
                try_files $uri $uri/ /index.php;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    How do I get the URL segments into query string parameters like in the Apache rules above?

    UPDATE 1: Trying Pothi's approach:

        # serve static files directly
        location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html)$ {
            access_log off;
            expires 30d;
        }

        location / {
            try_files $uri $uri/ /Director.php;
            rewrite "^/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1" last;
            rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1&action=$2" last;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    This produces the output "No input file specified." on every request. I'm not clear on whether the .php location gets triggered (and the request subsequently passed to PHP) when a rewrite in any block points at a .php file.

    UPDATE 2: I'm still confused about how to set up these location blocks and pass the parameters.

        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME ${document_root}Director.php?rt=$1{$args};
            include fastcgi_params;
        }

    UPDATE 3: It looks like the root directive was missing, which caused the "No input file specified." message. Now that this is fixed, I get the index file as if the URL were / on every request, regardless of the number of URL segments. It appears that my location regular expression is being ignored. My current config is:

        # This location is ignored:
        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            set $args $query_string&rt=$1;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location / {
            try_files $uri $uri/ /Director.php;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
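    For reference, one common nginx translation of the Apache rules quoted at the top — a sketch, untested against this exact setup — is to let try_files fall through to a named location and do the segment-to-parameter rewrites there. nginx appends the original query string automatically whenever the rewrite replacement already contains arguments, which covers Apache's QSA flag:

        location / {
            # serve real files and directories first, then fall through
            try_files $uri $uri/ @director;
        }

        location @director {
            # two segments -> rt and action; "last" re-enters location matching,
            # so /Director.php lands in the \.php$ block
            rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" /Director.php?rt=$1&action=$2 last;
            # one segment -> rt
            rewrite "^/([a-zA-Z0-9\-\_]+)/$" /Director.php?rt=$1 last;
            # everything else -> plain front controller
            rewrite ^ /Director.php last;
        }

    Also note that location blocks like "location /([a-zA-Z0-9\-\_]+)/" in the updates above are only treated as regular expressions when prefixed with ~ or ~*; without the tilde, nginx reads the pattern as a literal prefix string, which would explain why that block appears to be ignored.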

  • NGINX rewrite rules help. Redirect not working and want to get rid of index.php in urls

    - by Tamerax
    Hey! I have two questions for nginx users.

    1) I'm trying to move my Joomla server onto my new Linode running nginx, and after much (like, days) of searching and testing, I finally have a config that works with SEF URL plugins... sorta. I was using Apache on the old server with mod_rewrite, and life was fine in terms of SEF. Since nginx doesn't have mod_rewrite, I found something that works BUT it constantly leaves index.php in the URLs, e.g. http://mysite.com/index.php/forum — I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla as far as I'm aware. I know in WordPress it IS possible, but I have to use a plugin. Here is my config file:

        server {
            listen 80;
            server_name mysite.com www.mysite.com;

            access_log /home/public_html/mysite.com/log/access.log;
            error_log /home/public_html/mysite.com/log/error.log;

            root /home/public_html/mysite.com/public/;
            large_client_header_buffers 4 8k; # prevent some 400 errors

            index index.php index.html;
            fastcgi_index index.php;

            location / {
                expires 30d;
                error_page 404 = @joomla;
                log_not_found off;
            }

            # Rewrite
            location @joomla {
                rewrite ^(.*)$ /index.php?q=last;
            }

            # Static files
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
            }

            # PHP
            location ~ \.php {
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name;
            }
        }

    2) The second question should be easy, but I can't get it to work. I want to use the same config posted above and have either mysite.com or www.mysite.com forward to mysite.com/portal. Basically, when you hit the front page with or without the www, it all gets forwarded to a subdirectory on the server I called "portal". I have tried several variations of:

        rewrite ^/(.*) http://www.example.com/portal/$1 permanent;

    but it usually ends with Firefox telling me there is some crazy loop happening, the address bar showing something like mysite.com/portalportalportalportalportal... on and on. So, any help on either of these issues would be awesome!! Thanks!!
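    On question 1, a hedged sketch of the usual Joomla-on-nginx approach (assuming Joomla's "Use URL rewriting" SEF setting is enabled, so it stops generating index.php links): replace the error_page/@joomla pair with a try_files fall-through. On question 2, the loop happens because "rewrite ^/(.*)" also matches /portal itself; anchoring the rewrite to the bare root avoids that:

        location / {
            # serve real files, otherwise hand the path to Joomla's front
            # controller without exposing index.php in the URL
            try_files $uri $uri/ /index.php?$args;
        }

        # match only the empty front page, so /portal is never rewritten again
        rewrite ^/$ /portal/ permanent;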

  • How to configure apache's mod_proxy_html to work as an ajax proxy?

    - by dcerecedo
    I'm trying to build a web site that lets you view and manipulate data from any page on any other website. To do that, I have to bypass 'Allow Origin' problems: I'm loading the other domain's content in an iframe, and I have to manipulate its content with JavaScript downloaded from my domain.

    My first attempt was to write a simple proxy myself, requesting the other domain's pages through a server proxy coded in Java that not only serves the content but rewrites links (src's and href's) in it, so that content referenced by these links also gets downloaded through my handmade proxy. The result is not bad but has problems with URLs in CSS and scripts. That's when I realized that mod_proxy_html is supposed to do exactly this job. The problem is that I cannot figure out how to make it work as expected.

    Let's suppose my server runs at my-domain.com, and to proxy and transform content from another domain I'd make a request like this:

        my-domain.com/proxy?url=http://another-domain.com/some/content

    I'd want mod_proxy_html to serve the content and rewrite URLs found in http://another-domain.com/some/content in the following ways:

        Absolute URLs not from another-domain.com: no rewriting
        Root-relative URLs: /other/content -> /proxy?url=http://another-domain.com/other/content
        Relative URLs: other/content -> /proxy?url=http://another-domain.com/some/content/other/content
        Relative-to-parent URLs: ../other/content -> /proxy?url=http://another-domain.com/some/other/content

    The URL should be specified at runtime, not configuration time. Can this be achieved with mod_proxy_html? Could anyone provide a simple working configuration to start with?

    EDIT 1 - First approach: The following site config works fine with sites that use absolute URLs everywhere, like http://www.huffingtonpost.es/. You can try this config on localhost: http://localhost/asset/http://www.huffingtonpost.es/

        <VirtualHost *:80>
            ServerName localhost
            LogLevel debug
            ProxyRequests off
            RewriteEngine On
            RewriteRule ^/asset/(.*) $1 [P]
            ProxyHTMLURLMap $1 /asset/
            <Location /asset/>
                ProxyPassReverse /
                ProxyHTMLURLMap / /asset/
            </Location>
        </VirtualHost>

    But as explained in the documentation, if I hit a site using relative URLs, I'd like to have these rewritten in the HTML via mod_proxy_html. So I should change the Location block as follows:

        <Location /asset/>
            ProxyPassReverse /
            # Depending on your system use one line or the other
            # Ubuntu:
            # SetOutputFilter proxy-html
            # any other system:
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /asset/
        </Location>

    ...which doesn't seem to work. Comments, hints and ideas welcome!
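    For what it's worth, a sketch of a fuller Location block — assuming mod_proxy_html 3.x or later, where ProxyHTMLExtended and regex maps (the R flag) are available. Document-relative links (other/content, ../other/content) are resolved by the browser against the current page URL, so they generally cannot be remapped server-side at all; only server-relative (/other/content) and absolute URLs can:

        <Location /asset/>
            ProxyPassReverse /
            ProxyHTMLEnable On
            # also rewrite URLs inside scripting events and embedded CSS,
            # which the plain HTML parser skips
            ProxyHTMLExtended On
            # absolute URLs back through the proxy
            ProxyHTMLURLMap http://www.huffingtonpost.es/ /asset/http://www.huffingtonpost.es/
            # server-relative URLs; R = treat the rule as a regex
            ProxyHTMLURLMap ^/(.*)$ /asset/http://www.huffingtonpost.es/$1 R
        </Location>

    The runtime-chosen target URL remains the hard part: mod_proxy_html maps are fixed at configuration time, so a fully dynamic ?url= proxy would still need a small rewriting layer of its own.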

  • Why is Windows 7 not following all routes?

    - by GigabyteProductions
    My computer is connected to my secondary router, which runs the 192.168.42.0/24 network, and my computer also has a route that directs anything on that network to the router. But for anything on that network other than the router itself, it gets the ICMP response:

        Reply from 192.168.42.194: Destination host unreachable.

    (192.168.42.194 being my computer). Every other network works — all of the internet, and addresses on my primary router like 192.168.1.* — just not the 192.168.42.0/24 network. route print returns:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0     192.168.42.1   192.168.42.194     276
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
             192.168.42.0    255.255.255.0          On-link   192.168.42.194     276
           192.168.42.194  255.255.255.255          On-link   192.168.42.194     276
           192.168.42.255  255.255.255.255          On-link   192.168.42.194     276
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link   192.168.42.194     276
          255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link   192.168.42.194     276
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0     192.168.42.1  Default
        ===========================================================================

    The only time anything is supposed to send an ICMP Host Unreachable response is when there's no route to it, right? So why is my own computer sending that to ping or tracert when I have the route 192.168.42.0 with the mask 255.255.255.0? An IP address of 192.168.42.2 surely fits that route. If I explicitly add a route for the IP address I am trying to access, it works, e.g.:

        route add 192.168.42.2 mask 255.255.255.255 192.168.42.1

    (the 192.168.42.1 right after the mask is the gateway, i.e. the device to send the packet to so it can route it further). But why won't it work for the implicit route that's automatically in the table? I disabled my firewall, too (I use Comodo, if anyone thinks this is still a problem). I've even tried explicitly adding the gateway 192.168.42.1 to the 192.168.42.0/24 route instead of routing through 0.0.0.0's gateway, which is what On-link does, but that didn't work either, so it's not a gateway specification problem. If the host were really unreachable, it would be the router's IP address (192.168.42.1) sending that to me... This network is entirely my creation, so there's no problem such as an administrator locking me out, because I am the administrator.
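    One hedged diagnostic angle: a "Destination host unreachable" reported from your own address on an On-link route usually means layer-2 (ARP) resolution failed rather than a route being missing, which would also explain why forcing packets via the router's gateway address works. Two quick checks from an elevated prompt:

        rem does Windows have (or ever get) a MAC entry for the target?
        arp -a | findstr 192.168.42
        rem which route does the stack actually select for this destination?
        route print 192.168.42.0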

  • Create text file named after a cell containing other cell data

    - by user143041
    I tried using the code below in Excel on my Mac Mini running OS X 10.7.2, and it keeps raising an error due to the file name / path. (The Excel file I am creating is going to be a template with my formulas and macros installed, which will be used over and over.)

        Sub CreateFile()
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                MyFile = ActiveCell.Value & ".txt"
                fnum = FreeFile()
                Open MyFile For Output As fnum
                Print #fnum, ActiveCell.Offset(0, 1) & " " & ActiveCell.Offset(0, 2)
                Close #fnum
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub

    What I'm trying to do:

    1st objective: I would like the following data to be used to create text files. Column A holds what I need the file names to be, and B2 is the content I need in the text file. So A2 — "repair-video-game-Glassboro-NJ-08028.txt" — is the file name, and B2 is the content of that file. Next, A3 is the file name and B3 the content for that file, etc. ONCE the loop reaches cells A16 and B16 (length will vary), file creation should stop; if not, I can delete the additional files created. This sheet will never change. Is there a way to make the Excel macro always start from this sheet instead of having to select the starting point with the mouse?

    2nd objective: I would like the following data to be used to create a text file. A1 holds what I need the name of the file to be, and column B is the content I want in the file. So A2 is the file name, "geo-sitemap.xml", and B:B is the content of the file (ignore the .xml file extension in the photo). ONCE the loop reaches cell B16 (length will vary), file creation should stop; if not, I can adjust the cells that need content (the formulated content you see in the image is preset for 500 rows). This sheet will never change. Is there a way to make the Excel macro always start from this sheet instead of having to select the starting point with the mouse?

    I can provide the content of the cells that are filled in by Excel formulas and that are not to be included in the .txt files. It is OK if that is not possible; I can delete the extra cells that are not populated (based on the data sheet). Please let me know if you need any more information or clarity and I will be happy to provide it.
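    A minimal reworked sketch of the macro with an explicit output folder and a sanitized name — the folder path and the "Data" sheet name here are assumptions to adapt. On a Mac, classic VBA file paths use colons as separators, and an empty or illegal ActiveCell value is the usual cause of a "file name / path" error:

        Sub CreateFiles()
            Dim basePath As String, myFile As String
            Dim fnum As Integer
            ' assumed output folder -- change to a real folder on your machine
            basePath = "Macintosh HD:Users:yourname:Documents:"
            ' always start from a fixed sheet/cell instead of the mouse selection
            Worksheets("Data").Activate
            Range("A2").Select
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                ' skip blank names and strip path-separator characters
                If Len(Trim(ActiveCell.Value)) > 0 Then
                    myFile = basePath & Replace(ActiveCell.Value, ":", "-") & ".txt"
                    fnum = FreeFile()
                    Open myFile For Output As #fnum
                    Print #fnum, ActiveCell.Offset(0, 1).Value & " " & ActiveCell.Offset(0, 2).Value
                    Close #fnum
                End If
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub

    Pointing the macro at a named sheet and starting cell, as above, also answers the "without selecting with the mouse" part of both objectives.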

  • SQL SERVER – Beginning SQL Server: One Step at a Time – SQL Server Magazine

    - by pinaldave
    I am glad to announce that along with SQLAuthority.com, I will be blogging on the prominent site of SQL Server Magazine. My very first blog post there is already live; read it here: Beginning SQL Server: One Step at a Time. My association with SQL Server Magazine has been quite long: I have written nearly 7 to 8 SQL Server articles for the print magazine, and it has been a great experience. I was staying in the United States at that time. I moved back to India for good, and during this process I put everything on hold for a while. As with many things, "temporary" became "permanent", and coming back to SQLMag stayed on hold for a long time. Well, this New Year things have changed: once again, I am back with my online presence at SQLMag.com.

    Everybody is a beginner at every task or activity at some point in his/her life: spelling words for the first time, learning how to drive for the first time, etc. No one is perfect at the start of any task, but every human is different. As time passes, we all develop our interests and begin to study our subject of interest. Most of us dream of getting a job in the area of our study; however, things change as time passes. I recently read somewhere online (I could not find the link again while writing this) that many successful people never formally studied the area in which they are successful. After going through a formal learning process in what we love, we refuse to stop learning, and we finally stop changing career and focus areas. We move, we dare and we progress.

    The IT field is similar to our life. New IT professionals come to this field every day. There are two types of beginners: a) those who are associated with the IT field but not familiar with particular technologies, and b) those who are absolutely new to the IT field. Learning a new technology is always exciting and overwhelming for enthusiasts. I have been working with databases (SQL Server in particular) for more than 7 years, but I am still overwhelmed with how much there is to learn. I continue to learn, and I do not think I should ever stop. Just like everybody, I want to be in the race and get ahead in learning the technology. For that, I am always looking for good guidance. I always try to find a good article, blog or book chapter which can teach me what I really want to learn at this stage in my career and can be immensely helpful. Quite often, I prefer to read material where the author does not judge me or assume my understanding. I like to read new concepts like a child who takes his/her first steps of learning without any prior knowledge.

    Keeping my personal philosophy and preference in mind, I will be blogging on the SQL Server Magazine site. I will be blogging about beginners' topics, for those who really want to start and make a mark in this area, and for all those who have an extreme passion for learning. I am happy that this is a good start for the year; one of my resolutions is to help every beginner. It is entirely possible that in the future they will all grow and find the same articles quite 'easy' — well, when that happens, it indicates the success of the article and material! I encourage everybody to read my SQL Server Magazine blog; I will be blogging there frequently on various topics. To begin, we will be talking about performance tuning, and I assure you that I will not shy away from multiple other areas.

    Read my SQL Server Magazine blog: Beginning SQL Server: One Step at a Time. I think the title says it all. Do leave your comments and feedback to indicate your preference of subject and interest. I am going to continue writing on the subject, and the aim is of course to help you grow in this field.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Top 20 Daily Deal Sites In India

    - by Damodhar
    If you have never heard of Groupon, you probably are not working in the tech industry, because it is all over the blogosphere. After all, growing from zero to a US$1.35 billion valuation in 18 months is pretty AMAZING. Inspired by this, the following bunch of Groupon clones are already rising in India. This business model is clearly emerging and is changing the way online shopping happens in India.

        SnapDeal — features best-deal coupons at an unbeatable price on the best stuff to do, see, eat, and buy in your city. It provides vouchers and discounts in all the major cities like Delhi, Mumbai, Chennai and Bangalore.
        KhojGuru — exclusive discount coupons from hundreds of brands and retailers. These discounts can easily be downloaded as an SMS onto a mobile phone, or a printout can be taken.
        MyDala — a platform which gets us great deals in our city, leveraging the "power of group buying". Group buying happens when like-minded people come together to get deals that we could never get on our own as individuals.
        SoSasta — a great place which not only tells us about the hidden treasures of our city but also makes them affordable at the end of the month.
        DealsAndYou — a group buying portal that features a daily deal on the best stuff in some of India's leading cities.
        AajKaCatch — its concept is to provide the most unique, useful and qualitative products at a very low price, so you can shop without the hassle of cluttered products.
        BindassBargain — Bindaas Bargain offers a new deal every day! Great stuff ranging from cool gadgets, home theatres, luxury watches and smash games.
        MasthiDeals — gets you a great deal on great stuff to do, eat, buy or see in your city. They have a team of about 25 wonderful people in their Chennai office working side by side with folks in MasthiDeal's other cities.
        Koovs — founded by a team of IIT alumni who have brought in their expertise from the internet industry. Koovs is a Bangalore-based startup and a one-point solution for all your desires.
        Taggle — brings you a variety of offers from some of the most respected brands in the country. This website uses collective buying to create a win-win for local businesses and their customers.
        BuzzInTown — Buzzintown.com is a portal owned by Wortal Inc., a US-headquartered company with a pan-India presence through its India subsidiary, managed by a vastly experienced set of global leaders from the media, entertainment and technology industries.
        BuyThePrice — lines up the best win-win deals for both consumers and vendors, and also ensures that each order is dispatched in the shortest time possible.
        24HoursLoot — an online store selling a new t-shirt (sometimes other products) every day at a deeply discounted price in limited quantity/stock.
        DealMagic — customers get exposure to the best their city has to offer, at unbeatable prices (50-90% off). They never feature more than one business on the website on any given day, so they have to be very, very selective about who gets featured.
        Dealivore — ICUMI Technologies Pvt Ltd is the company operating the Dealivore service. Founded in December 2009, ICUMI is privately owned and funded.
        LootMore — an online store that exclusively focuses on selling cool quality stuff at cheap prices. Here you'll always find the latest and greatest brands at prices you can afford.
        Foodome — features the best coupons at an unbeatable price on restaurants and fine dining, including where to spend your birthday party. They provide coupons only in Chennai as of now.

    Top online shopping sites — nationwide:

        ebay.in — eBay is The World's Online Marketplace, enabling trade on a local, national and international basis. With a diverse and passionate community of individuals and small businesses, eBay offers an online platform where millions of items are traded each day.
        FutureBazaar — Future Group, led by its founder and Group CEO, Mr. Kishore Biyani, is one of India's leading business houses, with multiple businesses spanning the consumption space.
        TradeUs — launched in July 2009, and in a short span of time it has turned into one of India's foremost shopping portals, setting the Indian e-commerce abode aflame.
        BigShoeBazaar — (BSB) is the largest online authorized shoe store in South Asia.
        Croma — promoted by Infiniti Retail Ltd, a 100% subsidiary of Tata Sons; one of the world's leading retailers, ensuring that you buy nothing but the best.

    This article, titled "Top 20 Daily Deal Sites In India", was originally published at Tech Dreams.

  • Install the Ajax Control Toolkit from NuGet

    - by Stephen Walther
    The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application. If you haven't used NuGet yet, you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications, and it works with both C# and VB.NET applications. For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at: http://NuGet.org

    Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application. Follow these steps. Right click on your project in the Solution Explorer window and select the option Add Library Package Reference. In the Add Library Package Reference dialog, select the Online tab and enter AjaxControlToolkit in the search box. Click the Install button and the latest version of the Ajax Control Toolkit will be installed.

    Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application. In a Web Application Project, you can see the new reference in the References folder.

    Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="ajaxToolkit"
                     assembly="AjaxControlToolkit"
                     namespace="AjaxControlToolkit" />
              </controls>
            </pages>
          </system.web>
        </configuration>

    You should rebuild your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up the new controls (you won't get Intellisense for the Ajax Control Toolkit controls until you do a build). After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see http://www.asp.net/ajax/ajaxcontroltoolkit/samples/ for a reference for the controls).

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs"
            Inherits="WebApplication1.WebForm1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Rounded Corners</title>
            <style type="text/css">
                #pnl1
                {
                    background-color: gray;
                    width: 200px;
                    color: White;
                    font: 14pt Verdana;
                }
                #pnl1_contents
                {
                    padding: 10px;
                }
            </style>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Panel ID="pnl1" runat="server">
                    <div id="pnl1_contents">
                        I have rounded corners!
                    </div>
                </asp:Panel>
                <ajaxToolkit:ToolkitScriptManager ID="sm1" runat="server" />
                <ajaxToolkit:RoundedCornersExtender TargetControlID="pnl1" runat="server" />
            </div>
            </form>
        </body>
        </html>

    The page contains the following three controls:

        Panel – The Panel control named pnl1 contains the content which appears with rounded corners.
        ToolkitScriptManager – Every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit.
        RoundedCornersExtender – This Ajax Control Toolkit extender targets the Panel control and makes it appear with rounded corners. You can control the "roundiness" of the corners by modifying the Radius property.

    Notice that you get Intellisense when typing the Ajax Control Toolkit tags: as soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear. When you open the page in a browser, the contents of the Panel appear with rounded corners. The advantage of using the RoundedCorners extender is that it is cross-browser compatible: it works great with Internet Explorer, Opera, Firefox, Chrome, and Safari, even though different browsers implement rounded corners in different ways. The RoundedCorners extender even works with an ancient browser such as Internet Explorer 6.

    Getting the Latest Version of the Ajax Control Toolkit: The Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work fixing bugs and adding new features to the project, and we plan to have a new release of the Ajax Control Toolkit each month. The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet: you can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.
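    If you prefer the Package Manager Console over the dialog, the equivalent command uses the package id as published on NuGet.org:

        PM> Install-Package AjaxControlToolkit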

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we’ve used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task-related issues, the aptly named Task class.

    The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition. We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added. In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate – nothing more. In particular, note that I never mentioned threading or parallelism in my description of a Task. Although the Task class exists in the new System.Threading.Tasks namespace: tasks are not directly related to threads or multithreading. Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used. The Task API supports using Tasks in an entirely single-threaded, synchronous manner.

    Tasks are very much like standard delegates. You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously. This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously.

    The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments. The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4. If the wrapped delegate within a Task raises an OperationCanceledException during its operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task’s IsCanceled property will be set to true automatically. This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling.

    Tasks also provide a clean API which can be used for waiting on a task. Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model. Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed. Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely. In addition, the Task class provides static methods for waiting on multiple tasks – Task.WaitAll and Task.WaitAny, again with overloads providing timeout options. This provides a very simple, clean API for waiting on single or multiple tasks.

    Finally, Tasks provide a much nicer model for exception handling. If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property. This exception is stored with the Task directly, and does not tear down the application. Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception. For example, suppose we have the following code:

        Task taskOne = new Task(() =>
        {
            throw new ApplicationException("Random Exception!");
        });
        Task taskTwo = new Task(() =>
        {
            throw new ArgumentException("Different exception here");
        });

        // Start the tasks
        taskOne.Start();
        taskTwo.Start();

        try
        {
            Task.WaitAll(new[] { taskOne, taskTwo });
        }
        catch (AggregateException e)
        {
            Console.WriteLine(e.InnerExceptions.Count);
            foreach (var inner in e.InnerExceptions)
                Console.WriteLine(inner.Message);
        }

    Here, our routine will print:

        2
        Different exception here
        Random Exception!

    Note that we had two separate tasks, each of which raised a distinctly different type of exception. We can handle this cleanly, with very little code, in a much nicer manner than with the Asynchronous Programming API. We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property. Instead, we just raise our exception as normal and handle AggregateException in a single location in our calling code.
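    The cancellation behavior described above can be seen in a few lines as well — a rough sketch, with illustrative names only:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class CancellationDemo
        {
            static void Main()
            {
                var cts = new CancellationTokenSource();

                // pass the token to the constructor so the Task machinery can
                // associate the OperationCanceledException with this token
                Task worker = new Task(() =>
                {
                    while (true)
                        cts.Token.ThrowIfCancellationRequested();
                }, cts.Token);

                worker.Start();
                cts.Cancel();

                try { worker.Wait(); }
                catch (AggregateException) { /* wraps a TaskCanceledException */ }

                Console.WriteLine(worker.IsCanceled);   // True
            }
        }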

  • mysql not starting

    - by Eiriks
    I have a server running on rackspace.com; it has been running for about a year (collecting data for a project) with no problems. Now it seems MySQL froze — I could not connect through the ssh command line, through a remote app (Sequel Pro), or via the web (pages using the db just froze). I got a bit eager to fix this quickly and rebooted the virtual server, which runs Ubuntu 10.10. It is a small virtual LAMP server (10 GB storage — I'm only using 1 — and 256 MB RAM, which has not been a problem). Now, after the reboot, I cannot get MySQL to start again.

        service mysql status
        mysql stop/waiting

    I believe this just means MySQL is not running. How do I get it running again?

        service mysql start
        start: Job failed to start

    No. Just typing 'mysql' gives:

        mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    There is a .sock file in that folder; 'ls -l' gives:

        srwxrwxrwx 1 mysql mysql 0 2012-12-01 17:20 mysqld.sock

    From googling this for a while now, I see that many people talk about the log files and my.cnf.

    Logs: Not sure which ones I should look at. This log file is empty: '/var/log/mysql/error.log', and so are '/var/log/mysql.err' and '/var/log/mysql.log'.

    my.cnf is located in '/etc/mysql' and looks like this. I can't see anything clearly wrong with it either.

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user            = mysql
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address    = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer          = 16M
        max_allowed_packet  = 16M
        thread_stack        = 192K
        thread_cache_size   = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover      = BACKUP
        #max_connections    = 100
        #table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit   = 1M
        query_cache_size    = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file   = /var/log/mysql/mysql.log
        #general_log        = 1
        log_error           = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries   = /var/log/mysql/mysql-slow.log
        #long_query_time    = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        #       other settings you may need to change.
        #server-id          = 1
        #log_bin            = /var/log/mysql/mysql-bin.log
        expire_logs_days    = 10
        max_binlog_size     = 100M
        #binlog_do_db       = include_database_name
        #binlog_ignore_db   = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet  = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer      = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    I need the data in the database (so I'd like to avoid reinstalling), and I need it back up and running again. All hints, tips and solutions are welcomed and appreciated.
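    Since all three log files are empty, a few hedged first steps (paths taken from the my.cnf above; commands are standard Ubuntu tooling):

        # a full disk is a very common cause of a silent mysqld startup failure
        df -h
        # the data directory must still be owned by the mysql user
        ls -ld /var/lib/mysql
        # upstart logs failed jobs to syslog on Ubuntu 10.10
        grep mysql /var/log/syslog | tail -n 20
        # run the daemon in the foreground so errors reach the terminal
        sudo -u mysql /usr/sbin/mysqld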

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This allows to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles as well as the disk and network throughput (in MB/s or IOP/s) that are available for an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux containers are based on, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will be explained in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog |
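    As a small taste of the tooling that part two will cover, the lxc userspace commands look roughly like this — a sketch only; the container name is made up and template names vary by distribution:

        # create a container from a distribution template
        lxc-create -n ol6cont -t oracle
        # boot it and attach to its console
        lxc-start -n ol6cont
        # list containers known to the host
        lxc-ls
        # suspend and resume every process inside the container
        lxc-freeze -n ol6cont
        lxc-unfreeze -n ol6cont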

  • SQLAuthority News – Don’t Be Afraid To Fool The World – Video by John Sonmez

    - by Pinal Dave
    Sometimes certain words and statements grab your attention, and it is hard to stop thinking about them afterward. Something similar happened a few days ago when I read a tweet from my friend and Pluralsight author John Sonmez. He tweeted a very interesting statement: "I don't know a single successful person, who doesn't deep down think they have the world fooled. #fooltheworld" — John Sonmez.

    When I read it, I was extremely intrigued. I read it many times, I shared it with my family, and I just could not stop interpreting it. It was indeed fun to read it again and again, and there are so many different meanings one can take away from it. I know John very well; he is a wonderful person with a very positive energy for life. I just had to request him to build a video around it. Within 5 days of my request, John created a wonderful video on this subject. I watched it multiple times. I am not going to write much about what was in the video, as I suggest you watch it yourself. Here is a personal story of my own which is absolutely relevant to this video; I think my story 100% resonates with John's.

    A Real Story from My Past: Three years ago, I submitted a session to one of the SharePoint conferences as a SQL Server session. My session was accepted and I prepared it very well. I put more than 2 months into preparing for the session, traveled thousands of miles to the event, and was very excited to present it. However, there was a little mix-up. Several sessions had titles similar to mine; one of the other speakers had also proposed a database-related session, which was selected. When the material went to print, the printing team got confused and by mistake swapped the sessions. The other speaker got the "Performance with SQL Server" session and I received "Performance with SharePoint". It was indeed a big mix-up, but now that was how it appeared in the event guide and how it was marketed everywhere at the event.

    A Big Mix-Up: I talked with the event organizer and we came to the conclusion that we all had good intentions, things just got mixed up, and now "the show must go on". I hesitated greatly to go and present the session, as I had personally never worked that closely with SharePoint in my life, and my session abstract talked about SharePoint tricks in depth. Two hours before the session, I took the help of one of my friends and installed SharePoint on my box. He showed me a few things here and there, but it was never enough time to learn everything I wanted to learn.

    The Moments of Confidence: I was very scared and nervous to go on the stage, as SharePoint was not something I felt comfortable with. However, I decided to go on stage with confidence as a SharePoint expert. Though I did not know SharePoint at my best, I had confidence that whatever I knew was correct and that I would not misguide people. I had no intention of fooling people, but I also had no intention of admitting that I was a fool and that everyone had wasted their time and money attending my session. I decided to be honest, but at the same time to take the session beyond my expertise. The sixty minutes of the session went very well, and I was able to manage all the difficult questions at a satisfactory level. When the session was over, my feeling was that I would not have presented or talked any differently had I known more about SharePoint at the time. I think it was one of my best sessions, and that was reflected in the session feedback as well: I was the best-rated speaker across all the tracks, and my session had the highest ranking. I was delighted, and I learned a very valuable lesson: I must go beyond my limits and knowledge. I must aim higher and work harder. I should not lie, but I should have confidence that I have a good heart and that I put 100% into my efforts.

    Lessons Learned: Since this incident I have learned a lot about SharePoint, and I am now a regular speaker at various SharePoint conferences, along with SQL Server sessions. I am motivated and I am not afraid. I know people have high expectations of me, but I have learned not to judge myself before I do my best. I leave the judgment of my efforts to my audience. I do not carry the burden of the feedback with me, even though I know what my audience expects. I know what I know, and I put in my best. I must go out; if I fail, I learn from my mistakes, but I must keep my progress trajectory high. As John said in the video, sometimes success is not something we can achieve 100%, but we can keep getting closer to it. As long as we do not lose focus on our goal and do not deviate from our path, we are doing things right.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retrieve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps contain tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package, for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste this code into your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run".

//Created by ODI Studio
import oracle.odi.core.OdiInstance
import oracle.odi.core.config.OdiInstanceConfig
import oracle.odi.core.config.MasterRepositoryDbInfo
import oracle.odi.core.config.WorkRepositoryDbInfo
import oracle.odi.core.security.Authentication
import oracle.odi.core.config.PoolingAttributes
import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder
import oracle.odi.domain.runtime.scenario.OdiScenario
import java.util.Collection
import java.io.*

/* -----------------------------------------------------------------------------------------
   Simple sample code to list all executions of the last version of a scenario,
   along with detailed steps information
   ----------------------------------------------------------------------------------------- */

/* update the following parameters to match your environment => */
def url = "jdbc:oracle:thin:@myserver:1521:orcl"
def driver = "oracle.jdbc.OracleDriver"
def schema = "ODIM1116"
def schemapwd = "ODIM1116PWD"
def workrep = "WORKREP1116"
def odiuser = "SUPERVISOR"
def odiuserpwd = "SUNOPSIS"

// Rather than hardcoding the project code and folder name,
// a great improvement here would be to parse the entire repository
def scenario_name = "LOAD_DWH" /* Scenario Name */
/* <= End of the update section */

//--------------------------------------
// Connection to the repository
// Note for ODI 11.1.1.6: you could use the predefined odiInstance variable if you are
// running the script from a Studio that is already connected to the repository
def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())
def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())
def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo))

//--------------------------------------
// In all cases, we need to make sure we have authorized access to the repository
def auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())
odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth)

//--------------------------------------
// Retrieve the scenario we are looking for
def odiScenario = ((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name)

if (odiScenario == null) {
    println("Error: cannot find scenario " + scenario_name)
    return
}

//--------------------------------------
// Retrieve all reports for the scenario
def OdiScenarioReportsList = odiScenario.getScenarioReports()

println("*** Listing all reports for Scenario \"" + scenario_name + "\" ")

//--------------------------------------
// For each report, print the following:
// - start time
// - duration
// - status
// - step reports: selection of details
for (s in OdiScenarioReportsList) {
    println("\tStart time: " + s.getSessionStartTime())
    println("\tDuration: " + s.getSessionDuration())
    println("\tStatus: " + s.getSessionStatus())

    def OdiScenarioStepReportsList = s.getStepReports()
    for (st in OdiScenarioStepReportsList) {
        println("\t\tStep Name: " + st.getStepName())
        println("\t\tStep Resource Name: " + st.getStepResourceName())
        println("\t\tStep Start time: " + st.getStepStartTime())
        println("\t\tStep Duration: " + st.getStepDuration())
        println("\t\tStep Status: " + st.getStepStatus())
        println("\t\tStep # of inserts: " + st.getStepInsertCount())
        println("\t\tStep # of updates: " + st.getStepUpdateCount() + '\n')
    }
    println("\t")
}

    Read the article

  • Enabling Http caching and compression in IIS 7 for asp.net websites

    - by anil.kasalanati
    Caching – There are two ways to set HTTP caching:
1. The max-age property of the Cache-Control header
2. The Expires header

Doing the changes via the IIS console:
1. Select the website for which you want to enable caching, then select "HTTP Response Headers" in the Features tab.
2. Select "Expire Web content"; by changing the "After" setting you generate the max-age property for Cache-Control.
3. The following is a screenshot of the resulting headers.

You can then use a tool like Fiddler and see 304 (Not Modified) responses coming from the server for cached content.

Doing it the web.config way – we can add a staticContent section inside system.webServer:

<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
  </staticContent>
</system.webServer>

Compression – By default, static compression is enabled on IIS 7.0, but the only content type that falls under that category is CSS, and this is not enough for most websites that use lots of JavaScript. If you thought that simply enabling dynamic compression would fix this, you are wrong, so please follow these steps.

On some machines the Dynamic Content Compression module is not installed; the steps to install it are:
- Open Server Manager
- Roles > Web Server (IIS)
- Role Services (scroll down) > Add Role Services
- Add the desired role (Web Server > Performance > Dynamic Content Compression)
- Next, Install, Wait… Done!

To enable compression:
- Open Server Manager
- Roles > Web Server (IIS) > Internet Information Services (IIS) Manager
- Next pane: Sites > Default Web Site > Your Web Site
- Main pane: IIS > Compression

Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The mime map is as follows:

<mimeMap fileExtension=".js" mimeType="application/javascript" />

so the corresponding entry in the httpCompression staticTypes section should be changed to:

<add mimeType="application/javascript" enabled="true" />

Doing it the web.config way – we can add the following section inside system.webServer:

<system.webServer>
  <urlCompression doDynamicCompression="false" doStaticCompression="true" />
</system.webServer>

More Information/References –
- http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
- http://www.west-wind.com/weblog/posts/98538.aspx
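Putting the two web.config fragments together, here is a minimal sketch of a complete configuration (an assumption for illustration: it presumes the Dynamic Content Compression role service is installed and the applicationHost.config staticTypes entry has been added as described above):

<configuration>
  <system.webServer>
    <!-- Cache static content for a year via Cache-Control: max-age -->
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
    </staticContent>
    <!-- Compress static files; .js is covered once its MIME type is registered in staticTypes -->
    <urlCompression doStaticCompression="true" doDynamicCompression="false" />
  </system.webServer>
</configuration>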

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies, Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader's Digest type posts. I've been sharing 1-3 tips or 'tricks' a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I'm lazy, I'm just going to compile a list of my favorite 'must know' tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I'm not forgetting anything. So without further ado…

1. Configure Your Preferences
Yes, there are a LOT of options. But you don't need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you're new to the tool or have been using it for 5 years, don't overlook these settings!

2. Disable Extensions You Aren't Using
If you're not using Data Miner, or if you're not working on a Migration – disable those extensions! SQL Developer will run leaner and meaner, plus the user interface will be a bit more simplified, making the tool easier to navigate as well.

3. SQL Recall via Keyboard
Access your history via the keyboard! Cycle through your recent SQL statements using these magic keystrokes: Ctrl+Up or Ctrl+Down.

4. Format Your Query Output Directly to CSV, XML, HTML, etc.
Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice (a quick example follows at the end of this post).

5. Drag & Drop Multiple Tables to the Worksheet
SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don't want to query. I guarantee this tip will save you time if you're joining 3 or more tables!

6. Drag & Drop Multiple Tables to a Relational Model
A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc.

7. View Your PL/SQL Execution Output Automatically
Function returns a refcursor? Procedure has 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place it into one or more data grids for you to browse.

8. Disable Automatic Code Insight and Use It On-Demand
Code Editor – Completion Insight – Enable Completion Auto-Popup (keyword being "Auto"). Some folks really don't like it when their IDEs or word processors try to do 'too much' for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand.

9. Interactive Debugging – Change Your Variable Values as You Step Through Your PL/SQL
Watches aren't just for watching. You can actually interact with your programs and 'see what happens' when X = 256 instead of 1.

10. Ditch the Tree View for the Schema Browser
There's nothing wrong with the Connection tree for browsing your database objects. But some folks just can't seem to get comfortable with it. So, we built them a Schema Browser that uses a drop-down control instead for changing up your schema and object types.

Already Know This Stuff, Want More?
Just check out my SQL Developer resource page; it's one of the main links at the top of this page. Or if you can't find something, just drop me a note in the form of a comment on this page and I'll do my best to find it or write it for you.
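As a quick illustration of tip #4, here is a minimal sketch; the table name is just a placeholder, and the statement should be run as a script (F5) for the hint to take effect:

select /*csv*/ * from employees;

Other output hints, such as /*html*/ or /*xml*/, work the same way.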

    Read the article

  • Handy ASP.NET MVC 2 Extension Methods – Where am I?

    - by Bobby Diaz
    Have you ever needed to detect what part of the application is currently being viewed?  This might be a bigger issue if you write a lot of shared/partial views or custom display or editor templates.  Another scenario, which is the one I encountered when I first started down this path, is when you have some type of menu and you'd like to be able to determine which item represents the current page so you can highlight it in some way.  A simple example is the menu that is created as part of the default ASP.NET MVC 2 Application template.

<div id="menucontainer">
    <ul id="menu">
        <li><%= Html.ActionLink("Home", "Index", "Home") %></li>
        <li><%= Html.ActionLink("About", "About", "Home") %></li>
    </ul>
</div>

The part that got me at first, however, was the following entry in the default style sheet (Site.css):

ul#menu li.selected a {
    background-color: #fff;
    color: #000;
}

I assumed that the .selected class would automatically get applied to the active menu item.  After trying a few different things, including the MvcContrib MenuBuilder, I decided to write my own extension methods so I would have more control over the output.  First, I needed a way to determine what view the user has navigated to based on the requested URL and route configuration.  Now, I am sure there are many ways to do this, but this is what I came up with:

public static class RequestExtensions
{
    public static bool IsCurrentRoute(this RequestContext context, String areaName,
        String controllerName, params String[] actionNames)
    {
        var routeData = context.RouteData;
        var routeArea = routeData.DataTokens["area"] as String;
        var current = false;

        if ( ((String.IsNullOrEmpty(routeArea) && String.IsNullOrEmpty(areaName)) ||
              (routeArea == areaName)) &&
             ((String.IsNullOrEmpty(controllerName)) ||
              (routeData.GetRequiredString("controller") == controllerName)) &&
             ((actionNames == null) ||
               actionNames.Contains(routeData.GetRequiredString("action"))) )
        {
            current = true;
        }

        return current;
    }

    // additional overloads omitted...
}

With that in place, I was able to write several UrlHelper methods that check if the supplied values map to the current view:

public static class UrlExtensions
{
    public static bool IsCurrent(this UrlHelper urlHelper, String areaName,
        String controllerName, params String[] actionNames)
    {
        return urlHelper.RequestContext.IsCurrentRoute(areaName, controllerName, actionNames);
    }

    public static string Selected(this UrlHelper urlHelper, String areaName,
        String controllerName, params String[] actionNames)
    {
        return urlHelper.IsCurrent(areaName, controllerName, actionNames)
            ? "selected" : String.Empty;
    }

    // additional overloads omitted...
}

Now I can re-work the original menu to utilize these new methods.  Note: be sure to import the proper namespace so the extension methods become available inside your views!

<div id="menucontainer">
    <ul id="menu">
        <li class="<%= Url.Selected(null, "Home", "Index") %>">
            <%= Html.ActionLink("Home", "Index", "Home") %></li>
        <li class="<%= Url.Selected(null, "Home", "About") %>">
            <%= Html.ActionLink("About", "About", "Home") %></li>
    </ul>
</div>

If we take it one step further, we can clean up the markup even more.
Check out the Html.ActionMenuItem() extension method and the refined menu:

public static class HtmlExtensions
{
    public static MvcHtmlString ActionMenuItem(this HtmlHelper htmlHelper, String linkText,
        String actionName, String controllerName)
    {
        var html = new StringBuilder("<li");

        if ( htmlHelper.ViewContext.RequestContext
                .IsCurrentRoute(null, controllerName, actionName) )
        {
            html.Append(" class=\"selected\"");
        }

        html.Append(">")
            .Append(htmlHelper.ActionLink(linkText, actionName, controllerName))
            .Append("</li>");

        return MvcHtmlString.Create(html.ToString());
    }

    // additional overloads omitted...
}

<div id="menucontainer">
    <ul id="menu">
        <%= Html.ActionMenuItem("Home", "Index", "Home") %>
        <%= Html.ActionMenuItem("About", "About", "Home") %>
    </ul>
</div>

Which generates the following HTML:

<div id="menucontainer">
    <ul id="menu">
        <li class="selected"><a href="/">Home</a></li>
        <li><a href="/Home/About">About</a></li>
    </ul>
</div>

I have created a codepaste of these extension methods if you are interested in using them in your own projects.  Enjoy!

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 2

    - by rajbk
    We continue building our report in this three-part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 3

Creating the Client Report Definition file (RDLC)
Add a folder called "RDLC"; this will hold our RDLC report. Right click on the RDLC folder, select "Add new item.." and add an RDLC with the name "Products". We will use the "Report Wizard" to walk us through the steps of creating the RDLC. In the next dialog, give the dataset the name "ProductDataSet". Change the data source to "NorthwindReports.DAL" and select "ProductRepository(GetProductsProjected)". The fields that are returned from the method are shown on the right. Click Next. Drag and drop the ProductName, CategoryName, UnitPrice and Discontinued into the Values container. Note that you can create much more complex groupings using this UI. Click Next. Most of the selections on this screen are grayed out because we did not choose a grouping in the previous screen. Click Next. Choose a style for your report. Click Next. The report graphic design surface is now visible. Right click on the report and add a page header and page footer. With the report design surface active, drag and drop a TextBox from the toolbox to the page header. Drag one more textbox to the page header. We will use the text boxes to add some header text as shown in the next figure. You can change the font size and other properties of the textboxes using the formatting toolbar (marked in red). You can also resize the columns by moving your cursor in between columns and dragging.

Adding Expressions
Add two more text boxes to the page footer. We will use these to add the time the report was generated and the page numbers. Right click on the first textbox in the page footer and select "Expression". Add the following expression for the print date (note the = sign at the left of the expression in the dialog below):

"© Northwind Traders " & Format(Now(),"MM/dd/yyyy hh:mm tt")

Right click on the second text box and add the following for the page count:

Globals.PageNumber & " of " & Globals.TotalPages

Formatting the page footer is complete. We are now going to format the "Unit Price" column so it displays the number in currency format. Right click on the [UnitPrice] column (not the header) and select "Text Box Properties..". Under "Number", select "Currency". Hit OK.

Adding a chart
With the design surface active, go to the toolbox and drag and drop a chart control. You will need to move the product list table down first to make space for the chart control. The document can also be resized by dragging on the corner or at the page header/footer separator. In the next dialog, pick the first chart type. This can be changed later if needed. Click OK. The chart gets added to the design surface. Click on the blue bars in the chart (not the legend). This will bring up drop locations for dropping the fields. Drag and drop the UnitPrice and CategoryName into the top (y axis) and bottom (x axis) as shown below. This will give us the total unit prices for a given category. That is the best I could come up with as far as what report to render, sorry :-) Delete the legend area to get more screen real estate. Resize the chart to your liking. Change the header, x axis and y axis text by double clicking on those areas. We made it this far. Let's impress the client by adding a gradient to the bar graph :-) Right click on the blue bar and select "Series properties".
Under “Fill”, add a color and secondary color and select the Gradient style. We are done designing our report. In the next section you will see how to add the report to the report viewer control, bind to the data and make it refresh when the filter criteria are changed.   Creating an ASP.NET report using Visual Studio 2010 - Part 3

    Read the article

  • ODI and OBIEE 11g Integration

    - by David Allan
    Here we will see some of the connectivity options to OBIEE 11g using the JDBC driver. You'll see, based upon some connection properties, how the physical or presentation layers can be utilized. In the integrator's guide for OBIEE 11g you will find a brief statement indicating that there actually is a JDBC driver for OBIEE. In OBIEE 11g it's now possible to connect directly to the physical layer; Venkat has an informative post here on this topic. In ODI 11g the Oracle BI technology is shipped with the product, along with KMs for reverse engineering and for using OBIEE models as a data source. When you install OBIEE 11g, a lightweight demonstration application is preinstalled in the server; when you open this in the BI Administration tool we see the regular 3-panel view within the administration tool. To interrogate this system via JDBC (just like ODI does using the KMs) we need a couple of things: the JDBC driver from OBIEE 11g, a Java client program, and the credentials. In my Java client program I want to connect to the OBIEE system; when I connect I can interrogate what the JDBC driver presents for the metadata. The metadata projected via the JDBC connection's DatabaseMetadata changes depending on whether the property NQ_SESSION.SELECTPHYSICAL is set when the Java client connects. Let's use the sample app to illustrate. I have a Java client program here that will print out the tables in the DatabaseMetadata; it will also output the catalog and schema. For example, if I execute without any special JDBC properties, as follows:

java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass

then I get the following returned, representing the presentation layer (the sample I used is XML, and has no schema):

Catalog               Schema   Table
Sample Sales Lite     null     Base Facts
Sample Sales Lite     null     Calculated Facts
…
Sample Targets Lite   null     Base Facts
…

Now if I execute with the only difference being the JDBC property NQ_SESSION.SELECTPHYSICAL with the value Yes:

java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass NQ_SESSION.SELECTPHYSICAL=Yes

then I see a different set of values, representing the physical layer in OBIEE:

Catalog                Schema   Table
Sample App Lite Data   null     D01 Time Day Grain
Sample App Lite Data   null     F10 Revenue Facts (Order grain)
…
System DB (Update me)
…

If this were a database system such as Oracle, the catalog value would be the OBIEE database name and the schema would be the Oracle database schema. Other systems which have a real catalog structure, such as SQL Server, would use their catalog value. It's this 'Catalog' and 'Schema' value that is important when integrating OBIEE with ODI. For the demonstration application in OBIEE 11g, the following illustration shows how the information from OBIEE is related via the JDBC driver through to ODI. In the XML example above, within ODI's physical schema definition on the right, we leave the schema blank since the XML data source has no schema. When I did this at first, I left the default value that ODI places in the Schema field, which was '<Undefined>', but this string is actually used by the RKM, so I ended up not finding any tables in this schema! Entering an empty string resolved this. Below we see a regular Oracle database example that has the database, schema, and physical table structure, and how this is defined in ODI.
Remember the physical versus presentation layer switch we made by passing the special property? To do this in ODI, the data server has a panel for properties where you can define key/value pairs, so if you want to select physical objects from the OBIEE server, you must set this property there. An additional change in ODI 11g is the OBIEE connection pool support, which has been implemented via a 'Connection Pool' flex field for the Oracle BI data server. Here you set the connection pool name from the OBIEE system that you specifically want to use; it is used by the Oracle BI to Oracle (DBLINK) LKM, so if you are using that LKM you must set this flex field. Hopefully a useful insight into some of the mechanics of how this hangs together.
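For completeness, here is a minimal sketch of the kind of Java client used above; the class name is made up for illustration, but the driver class, URL, credentials, and NQ_SESSION.SELECTPHYSICAL property are the ones shown earlier (bijdbc.jar must be on the classpath):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class MetaJdbc {
    public static void main(String[] args) throws Exception {
        // Load the OBIEE JDBC driver shipped in bijdbc.jar
        Class.forName("oracle.bi.jdbc.AnaJdbcDriver");

        Properties props = new Properties();
        props.setProperty("user", "weblogic");
        props.setProperty("password", "mypass");
        // Uncomment to interrogate the physical layer instead of the presentation layer:
        // props.setProperty("NQ_SESSION.SELECTPHYSICAL", "Yes");

        try (Connection conn = DriverManager.getConnection("jdbc:oraclebi://localhost:9703/", props)) {
            DatabaseMetaData md = conn.getMetaData();
            // Print catalog, schema, and table name for every table the server exposes
            try (ResultSet rs = md.getTables(null, null, "%", null)) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_CAT") + "\t"
                            + rs.getString("TABLE_SCHEM") + "\t"
                            + rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}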

    Read the article

  • RK4 Bouncing a Ball

    - by Jonathan Dickinson
    I am trying to wrap my head around RK4. I decided to do the most basic 'ball with gravity that bounces' simulation. I have implemented the following integrator given Glenn Fiedler's tutorial:

/// <summary>
/// Represents physics state.
/// </summary>
public struct State
{
    // Also used internally as derivative.
    // S: Position
    // D: Velocity.
    /// <summary>
    /// Gets or sets the Position.
    /// </summary>
    public Vector2 X;

    // S: Position
    // D: Acceleration.
    /// <summary>
    /// Gets or sets the Velocity.
    /// </summary>
    public Vector2 V;
}

/// <summary>
/// Calculates the force given the specified state.
/// </summary>
/// <param name="state">The state.</param>
/// <param name="t">The time.</param>
/// <param name="acceleration">The value that should be updated with the acceleration.</param>
public delegate void EulerIntegrator(ref State state, float t, ref Vector2 acceleration);

/// <summary>
/// Represents the RK4 Integrator.
/// </summary>
public static class RK4
{
    private const float OneSixth = 1.0f / 6.0f;

    private static void Evaluate(EulerIntegrator integrator, ref State initial, float t, float dt, ref State derivative, ref State output)
    {
        var state = new State();
        // These are a premature optimization. I like premature optimization.
        // So let's not concentrate on that.
        state.X.X = initial.X.X + derivative.X.X * dt;
        state.X.Y = initial.X.Y + derivative.X.Y * dt;
        state.V.X = initial.V.X + derivative.V.X * dt;
        state.V.Y = initial.V.Y + derivative.V.Y * dt;

        output = new State();
        output.X.X = state.V.X;
        output.X.Y = state.V.Y;
        integrator(ref state, t + dt, ref output.V);
    }

    /// <summary>
    /// Performs RK4 integration over the specified state.
    /// </summary>
    /// <param name="eulerIntegrator">The euler integrator.</param>
    /// <param name="state">The state.</param>
    /// <param name="t">The t.</param>
    /// <param name="dt">The dt.</param>
    public static void Integrate(EulerIntegrator eulerIntegrator, ref State state, float t, float dt)
    {
        var a = new State();
        var b = new State();
        var c = new State();
        var d = new State();

        Evaluate(eulerIntegrator, ref state, t, 0.0f, ref a, ref a);
        Evaluate(eulerIntegrator, ref state, t + dt * 0.5f, dt * 0.5f, ref a, ref b);
        Evaluate(eulerIntegrator, ref state, t + dt * 0.5f, dt * 0.5f, ref b, ref c);
        Evaluate(eulerIntegrator, ref state, t + dt, dt, ref c, ref d);

        a.X.X = OneSixth * (a.X.X + 2.0f * (b.X.X + c.X.X) + d.X.X);
        a.X.Y = OneSixth * (a.X.Y + 2.0f * (b.X.Y + c.X.Y) + d.X.Y);
        a.V.X = OneSixth * (a.V.X + 2.0f * (b.V.X + c.V.X) + d.V.X);
        a.V.Y = OneSixth * (a.V.Y + 2.0f * (b.V.Y + c.V.Y) + d.V.Y);

        state.X.X = state.X.X + a.X.X * dt;
        state.X.Y = state.X.Y + a.X.Y * dt;
        state.V.X = state.V.X + a.V.X * dt;
        state.V.Y = state.V.Y + a.V.Y * dt;
    }
}

After reading over the tutorial I noticed a few things that just seemed 'out' to me. Notably how the entire simulation revolves around t at 0 and state at 0 - considering that we are working out a curve over the duration it seems logical that RK4 wouldn't be able to handle this simple scenario. Never-the-less I forged on and wrote a very simple Euler integrator:

static void Integrator(ref State state, float t, ref Vector2 acceleration)
{
    if (state.X.Y > 100 && state.V.Y > 0)
    {
        // Bounce vertically.
        acceleration.Y = -state.V.Y * t;
    }
    else
    {
        acceleration.Y = 9.8f;
    }
}

I then ran the code against a simple fixed-time step loop and this is what I got:

0.05 0.20 0.44 0.78 1.23 1.76 ... 74.53 78.40 82.37 86.44 90.60 94.86 99.23 103.05 105.45 106.94 107.86 108.42 108.76 108.96 109.08 109.15 109.19 109.21 109.23 109.23 109.24 109.24 109.24 ... (the value then stays at 109.24)

As I said, I was expecting it to break - however I am unsure of how to fix it. I am currently looking into keeping the previous state and time, and working from that - although at the same time I assume that will defeat the purpose of RK4. How would I get this simulation to print the expected results?
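For reference, here is a minimal sketch of the fixed-time-step driver loop described above; the step size of 0.1 is an inference from the first printed value (0.5 * 9.8 * 0.1^2 ≈ 0.05), the iteration count is arbitrary, and the Integrator is repeated from the post so the sketch is self-contained:

using System;
using Microsoft.Xna.Framework; // Vector2

public static class Simulation
{
    public static void Main()
    {
        var state = new State();
        float t = 0.0f;
        const float dt = 0.1f; // assumed step size, see note above

        for (int i = 0; i < 400; i++)
        {
            RK4.Integrate(Integrator, ref state, t, dt);
            t += dt;
            Console.WriteLine("{0:0.00}", state.X.Y);
        }
    }

    // The bouncing-ball Euler integrator from the post, unchanged
    static void Integrator(ref State state, float t, ref Vector2 acceleration)
    {
        if (state.X.Y > 100 && state.V.Y > 0)
        {
            // Bounce vertically.
            acceleration.Y = -state.V.Y * t;
        }
        else
        {
            acceleration.Y = 9.8f;
        }
    }
}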

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really only an issue for governments, but they are not: they are an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. As of this writing anyway, we don't yet know if that is true or how they got the information. But how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with an appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information... he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...), is accused of accessing the material, copying it onto a writeable CD labeled "Lady Gaga", and walking out the door with it. Upload and... done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc., etc.... And then think about that last quote above from what was a very junior-level person in the organization... still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches, where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives), then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily jobs? Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive. Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.
Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see Stuxnet for why that is important...). In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive. And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre-optic Ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic. But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information. And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed, for example. Another benefit of Oracle's desktop virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative policies if, for example, an employee changes roles or leaves the company and should no longer have access to the information. Oracle's desktop virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use. But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises. For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • SQLAuthority News – Learning Trip – Traveling to Learn SQL Server

    - by pinaldave
    I am currently traveling to Delhi to learn SQL Server in person from my friend. You can read more details about why I am learning SQL Server. I have signed up for the course End to End SQL Server Business Intelligence at Koenig Solutions. Yesterday I blogged about my registration experience, and today I am going to write about my experience after arriving in Delhi.

From Ahmedabad to Delhi
I live with my wife and daughter in Bangalore (the IT hub of India); my hometown is Ahmedabad. My parents live in a city near Ahmedabad. I decided to spend a few days with my folks before signing up for three days of solid learning. I had selected an early morning flight to Delhi and landed at 8:30 AM. As soon as I checked email on my mobile, I was really glad to find that I had received the details of my pick-up vehicle from Koenig. I walked out of the airport and noticed a driver waiting with a placard bearing my name and photo. He was in a Koenig uniform, so there was no chance of a mix-up. Within minutes of landing in Delhi I was in my transport, heading to the Koenig Training Center. After a quick introduction, the driver handed me a bag (an eco-friendly bag, to be precise). The bag contained the following items:
- My registration form
- Printed copies of all the necessary documents I had received earlier
- A printed book for the next day's course
- INR 1000 (What?)
I was glad to receive the bag, but I was very confused by the Rs 1000. I decided to figure this out once I reached the training center.

Arriving at Koenig Inn Deluxe
Koenig's registration fee includes accommodation and meals. I had opted for the Koenig Inn Deluxe, as my friend recommended it and it was the right economical choice for me. When I reached my accommodation, they were well aware of my arrival, and I was immediately led to my spacious room. The room is well equipped with all the amenities (hot water, air conditioning, a coffee table, snacks, and free internet) and the staff is very friendly. I immediately got ready, as I had to go to the Koenig Training Center to meet the center head for a quick introduction.

Koenig Training Center
The training center is within five minutes of the accommodation. I was led to the center head right away and had a very meaningful conversation with Ms. Hema regarding my learning goals. She gave me a quick tour of the training center. I was amazed at the number of lab rooms in the center. The labs are spacious and give users the much-needed hands-on experience. I was shown the lab where my class would be held the very next day, and I was given my trainer's profile.

Mystery of Rs 1000
Well, after all this, I still had not forgotten about the Rs 1000 I was handed at the airport. When I asked about it, I was told that many students come from foreign countries and may not have Indian currency when they land at the airport. The money is for their immediate needs until they arrive at the training center; later, they can get their own currency converted at the Koenig travel desk. My curiosity was satisfied, but I had not expected this answer. I am amazed at the attention to detail.

Koenig Travel Desk
When I heard about the Koenig travel desk, I remembered that I have a few friends in Delhi and Gurgaon. I had completed all the formalities, so I had the rest of the day on my hands. I asked the travel desk if they could arrange a cab for the day so I could visit my friends in Gurgaon.
Within 10 minutes I was on my way to Gurgaon.

Telerik India Office Visit
What did I do in Gurgaon? I met my friends Abhishek Kant, Dhananjay Kumar and Amit Chowdhary. I visited the Telerik India office, and we had an excellent conversation on various aspects of technology and community. The Telerik India office is very spacious, and Abhishek Kant (Telerik India Country Manager) gave us a quick tour of the office. We had an excellent lunch and dinner. One thing is for sure – the day was well spent. Pinal Dave, Dhananjay Kumar and Abhishek Kant Later that evening I returned to my accommodation and decided to read up on a few of the topics I was going to learn the next day. In tomorrow's blog post I will discuss my learning experience. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

< Previous Page | 743 744 745 746 747 748 749 750 751 752 753 754  | Next Page >