Search Results

Search found 37714 results on 1509 pages for 'database documentation'.

Page 625 of 1509

  • Which tools do you use to make GTK themes?

    - by tutuca
    I'm trying to make a new GTK theme using the Murrine engine, with Humanity (the default in Ubuntu 9.10) as a template. You can grab the code at http://github.com/tutuca/themes. However, I found the process of creating a new theme with it cumbersome. There is no central starting point. The documentation for both the engine options (gtkrc's and stuff) and general theming practices (the format of the index.theme files, folders, bla bla) is scarce, and how-tos and tutorials are often old or subject to lots of opinionated debate, with confusing results (to me, having a web developer background, at least :-). So... I wanted to ask the fellow GTK themers and artists out there: which tools do you use to create a new theme, and what does your average workflow look like?

    Read the article

  • Problems detecting Android tablet IView 788TPC with mtp-tools

    - by Elder Geek
    I installed mtp-tools on 14.04 "Trusty" through the Software Center. No problems with the install. Issuing mtp-detect results in: 'Unable to open ~/.mtpz-data for reading, MTPZ disabled. libmtp version: 1.1.6 Listing raw device(s) No raw devices found.' I did some research and found that mtpfs might be required, so I installed that with: sudo apt-get install mtpfs. I still get the same result from mtp-detect: 'Unable to open ~/.mtpz-data for reading, MTPZ disabled. libmtp version: 1.1.6 Listing raw device(s) No raw devices found.' My research indicates that the mtp-tools package is still under development (source: http://libmtp.sourceforge.net/) and that the documentation is not comprehensive (source: man mtp-tools, as well as mtp-detect -h). I tried adding the PPA from "Are there any plans to improve mtp support on future Ubuntu releases?" but it seems this is already worked into Trusty 14.04 and won't resolve the problem. Can anyone provide a recommended course of action to resolve this problem?

    Read the article

  • Automatic Storage Management (ASM)

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Automatic Storage Management (ASM) (Doc ID 1187723.1). This Master Note is intended to provide an index and references to the most frequently used My Oracle Support notes with respect to Oracle Automatic Storage Management (ASM) environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories:
    - Automatic Storage Management (ASM) Concepts and Overview
    - Automatic Storage Management (ASM) Installation
    - Automatic Storage Management (ASM) Configuration
    - Automatic Storage Management (ASM) Administration
    - Automatic Storage Management (ASM) Migration and Upgrade
    - Automatic Storage Management (ASM) Monitoring
    - Automatic Storage Management (ASM) Troubleshooting and Debugging
    - Automatic Storage Management (ASM) Best Practices
    - Automatic Storage Management (ASM) Versions and Patches
    - ASMLIB
    - Database Machine, Exadata Storage Server and RAC
    - Documentation
    - Using My Oracle Support Effectively

    Read the article

  • Best practices when loading images for improving page loading speed

    - by Naoise Golden
    I am working on optimizing a page's loading speed. Here are some analytics: notice how the images, although accounting for only 65% of the total size (1.1MB), are by far the slowest-loading assets, at 96% of the time. I'd like to know what the recommended practices are for optimizing loading speed, taking only images into account. Some of the techniques we are already applying:
    - image compression
    - images hosted on a cookieless domain and a CDN
    - spriting everything that can be sprited
    - HTTP headers: Keep-Alive, and Expires set to one year (see the sketch below)
    Disclaimer: I have gone through the available documentation; I think that by focusing on image loading optimization I am not creating a duplicate or a subjective question.
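
    For illustration only (this is an addition, not from the original post): the far-future Expires policy in the last bullet amounts to emitting headers like the ones below. A minimal sketch, assuming the images are fronted by a Java servlet; the class name and content type are hypothetical.
      import javax.servlet.http.*;

      // Hedged sketch: serve a static image with a one-year Expires/Cache-Control
      // policy, as the post describes. Servlet wiring (web.xml / @WebServlet) omitted.
      public class CachedImageServlet extends HttpServlet {
          private static final long ONE_YEAR_MS = 365L * 24 * 60 * 60 * 1000;

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
              resp.setContentType("image/png");
              // Absolute expiry date one year out...
              resp.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_MS);
              // ...plus the equivalent relative max-age (365 days in seconds).
              resp.setHeader("Cache-Control", "public, max-age=31536000");
              // ... stream the image bytes here ...
          }
      }
    In practice the same headers are usually set in the web server or CDN configuration rather than in application code; the servlet form just makes the mechanism explicit.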

    Read the article

  • Scrum: How to work on one story at a time

    - by Juergen
    I was nominated as scrum master in a newly formed scrum team. We have already done some sprints. In the beginning I tried to get my team to work on one story at a time, but it didn't work. My team had difficulty distributing the tasks in a way that they could work simultaneously on one story. Maybe we are doing something wrong? For example: we have a story to create a new dialog. We create the following tasks:
    - Create model classes
    - Read model data from the database
    - Connect model classes with the view
    - Implement dialog handling
    - Save data on close
    - Test
    - Documentation
    - Solution description
    Can these tasks be done by more than one person at a time? The tasks, more or less, build upon each other. Or are we designing the tasks in a wrong way?

    Read the article

  • How do I interpret HDD S.M.A.R.T. results?

    - by Marty
    My laptop has recently started to become a bit unreliable, and for some reason I started to suspect that my HDD was failing. After a bit of hunting on the internet, I found Ubuntu's Disk Utility in the System menu and ran the long SMART diagnostics from it. However, since the documentation for Disk Utility (palimpsest?) is very poor, I'm not sure how to interpret the results. For example, the Read Error Rate is over 50 million (!), yet the Assessment is rated "Good". So would someone mind explaining to me how to interpret the results of these tests (especially the Normalized, Worst, Threshold and Value numbers)? And maybe tell me what they think of the results I got for my HDD? (Thanks)
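
    As background (an addition, not from the original question): a SMART attribute's raw count is normalized onto a vendor-chosen scale where higher is healthier; "Worst" is the lowest normalized value ever recorded, and the assessment stays "Good" until the normalized value drops to the vendor threshold. A minimal sketch of that rule, with hypothetical field names:
      // Hedged sketch of the SMART pass/fail rule; field names are illustrative,
      // not taken from Disk Utility or any particular library.
      class SmartAttribute {
          final String name;   // e.g. "Read Error Rate"
          final int value;     // current normalized value (higher is better)
          final int worst;     // lowest normalized value recorded so far
          final int threshold; // vendor-defined failure threshold
          final long raw;      // raw counter, e.g. 50 million; not judged directly

          SmartAttribute(String name, int value, int worst, int threshold, long raw) {
              this.name = name; this.value = value; this.worst = worst;
              this.threshold = threshold; this.raw = raw;
          }

          // The assessment flips to failing only when the normalized value
          // drops to or below the threshold; a huge raw count alone does not.
          // e.g. new SmartAttribute("Read Error Rate", 113, 99, 6, 50_000_000L)
          //      .isFailing() -> false, hence an overall "Good" rating.
          boolean isFailing() {
              return value <= threshold;
          }
      }
    This is why an enormous raw Read Error Rate can coexist with a "Good" assessment: on many drives the raw field packs counters that are expected to be large.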

    Read the article

  • Error java.lang.OutOfMemoryError: getNewTla using Oracle EPM products

    - by Marc Schumacher
    When running into a Java out-of-memory error, a very common reaction in the field is to increase the Java heap size. While this might help solve a heap-space out-of-memory error, it might not help fix an out-of-memory error for the Thread Local Area (TLA). Increasing the available heap space from 1 GB to 16 GB might not even help in this situation. The Thread Local Area (TLA) is part of the Java heap, but as the name already indicates, this memory area is local to a specific thread, so there is no need to synchronize with other threads when using it. For optimization purposes the TLA size is configurable using the Java command line option "-XXtlasize". Depending on the JRockit version and the available Java heap, the default values vary. Using Oracle EPM System (mainly 11.1.2.x) the following setting was tested successfully: -XXtlasize:min=8k,preferred=128k. More information about the "-XXtlasize" parameter can be found in the JRockit documentation: http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionXX.html
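
    For illustration only (the heap flags and jar name below are placeholders, not from the note), the option is simply appended to the JRockit java launch line alongside the usual memory settings:
      java -Xms1g -Xmx1g -XXtlasize:min=8k,preferred=128k -jar epm-service.jar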

    Read the article

  • Announcing: Oracle Solaris Cluster Product Bulletin, May 2014

    - by uwes
    New qualification announcements and general news for Oracle Solaris Cluster products can be found in the new Product Bulletin.
    Hardware Qualifications:
    - New Oracle ZFS Storage Appliance Kit versions with Oracle Solaris Cluster 4.1 geographic cluster
    - Pillar AXIOM 600 with Oracle Solaris Cluster 4.1 on x86
    Software Qualifications:
    - SAP Livecache 7.9 and MAXDB 7.9
    - Oracle Weblogic Server 12.1.2
    Latest Support Information:
    - Oracle Solaris Cluster 4.1 SRU 7 (4.1.7)
    - Oracle Solaris Cluster 3.3 3/13 patch train #5
    Resources:
    - Configuration guides and documentation
    - Product Update Bulletin Archives
    - Contacts
    Please read the Product Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here and follow the instructions.)
    For more information, go to: the Oracle.com Oracle Solaris Cluster page, the Oracle Technology Network Oracle Solaris Cluster page, the Oracle Solaris Cluster MOS community, the Partner web Oracle Solaris Cluster page, and the Oracle Solaris Cluster Blog.

    Read the article

  • Just graduated from the finest graduates of the NeoSmart Technologies Institute of BCD Learning.

    - by Leonard Mwangi
    My laptop, housing my life in its entirety, decided to go south this last weekend. I can't blame it, because I wanted it to do things that not even normal laptops do :) but the boot configuration got corrupted and I could not get it going no matter what I tried. Imagining what I would lose (my POCs, a couple of VMs, and lots and lots of documentation) pushed me to try anything short of a reinstall. After toiling for about an hour looking for a solution, I ended up at http://neosmart.net/wiki/display/EBCD/Recovering+the+Vista+Bootloader+from+the+DVD where the last solution is the one that worked for me. I am glad to be functional again :) Thanks, NeoSmart guys

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter): "What the Business Says Is Not What the Business Wants." Steve posed the question, "What issues have you had in interacting with the business to get your job done?" My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants, because the business doesn't have the words to describe what it wants accurately enough for IT. Not only do barriers of technical savvy exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean by these two design options, let's start with a picture: when we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach, preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we're back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it's a painful project for everyone – not because of me, but because the approach just doesn't get to what the business wants in the most effective way.) By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explain the decision-making process that relates to these reports. Sometimes I have them explain their business processes to me, or better yet show me their business processes in action, because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube.
    Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine-tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right-to-left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that's how I get past what the business says to what the business wants.

    Read the article

  • Oracle Solaris 11.1 Blog Post Roundup

    - by Larry Wake
    Here are a few recent posts about the also-recent Oracle Solaris 11.1 release:
    - What's New in Solaris 11.1? (Karoly Vegh)
    - New ZFS Encryption features in Solaris 11.1 (Darren Moffat)
    - Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage (Darren Moffat)
    - High Resolution Timeouts (Steve Sistare)
    - Solaris 11.1: Changes to included FOSS packages (Alan Coopersmith)
    - Documentation Changes in Solaris 11.1 (Alan Coopersmith)
    - How to Update to Oracle Solaris 11.1 Using the Image Packaging System (Peter Dennis)
    - svcbundle for easier SMF manifest creation (Glynn Foster)
    - Controlling server configurations with IPS (Bart Smallders)
    You can also see Markus Weber's list of interesting posts about Oracle Solaris 11 from last year, or take a look at my shortcut on how to search for Solaris posts by tag. If that's not enough, don't forget to register for next Wednesday's Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1 webcast with a live Q&A. It's November 7th, at 8 AM PT. The last time we did this, we got almost 300 questions, so for Wednesday, we're making sure we've got lots of engineers with fingers poised over their keyboards, ready for action.

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspx
    Setting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here's a step-by-step which gets you from 0 to a fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up). First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows, but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.). While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We'll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it's the same from step 3 onwards):
    1. Set up Mongo. Log into the first node, download Mongo and unzip it to C:. Rename the folder to remove the version – so you have c:\MongoDB\bin etc. – and create a new folder for the logs, c:\MongoDB\logs.
    2. Set up your data disk. When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a default 120GB disk and host your data on C:, then Mongo will allocate 6GB for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10GB, and then creating a new logical disk for your data from that spare 10GB, which will be allocated as E:. Create a new folder, e:\data.
    3. Start Mongo. When that's done, start a command line, point to the mongo binaries folder, install Mongo as a Windows Service, running in replica set mode, and start the service:
      cd c:\mongodb\bin
      mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet --install
      net start mongodb
    4. Open the ports. Mongo uses port 27017 by default, so you need to allow access on the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab, add a new rule, again to allow port 27017.
    5. Initialise the replica set. Start up your local mongo shell, connecting to your Azure VM, and initiate the replica set:
      c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net
      rs.initiate()
    This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you're using the default C: drive with 120GB, it may take so long that rs.initiate() never responds. If you're sat waiting more than 20 minutes, start another instance of the mongo shell pointing to the same machine to check on it). Run rs.conf() and you should see one node configured.
    6. Fix the host name for the primary – *don't miss this one*. For the first node in the replica set, Mongo on Windows doesn't populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn't accessible to the outside world. The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this:
      cfg = rs.conf()
      cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017'
      rs.reconfig(cfg)
    When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there's no replication yet.
    7. Add more nodes. For the next two VMs, follow steps 1 through 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you're using:
      rs.add('sc-xyz-db2.cloudapp.net:27017')
    Run rs.status() and you'll see your new node in STARTUP2 state, which means it's initializing and replicating from the PRIMARY. Repeat for your third node:
      rs.add('sc-xyz-db3.cloudapp.net:27017')
    When all nodes are finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service. Note – the process for AWS EC2 is exactly the same, but with one important difference: on the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.

    Read the article

  • How fast does a language get outdated?

    - by Dummy Derp
    I started learning Java recently, using books that I picked up from the library, some that I bought, and here and there the Java documentation. The book that I use for Java was published in the year 2011. In 2012, Java 8 will be released, followed by Java 9 in the year 2013. The questions are: How do I keep myself updated about developments in Java without having to buy a tome for Java 8 and/or Java 9? Is a book published in 2008 an outdated book for studying JSP and Servlets? I'm talking about Head First Servlets and JSP.

    Read the article

  • Can't remove the libpcap0.8 package

    - by Yogesh
    I am getting an error when running apt-get remove:
    root@System:~/Downloads# sudo apt-get remove
    The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f.
    And this is what happened when I ran apt-get remove -f:
    root@System:~/Downloads# sudo apt-get remove -f
    The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)
    Running sudo apt-get remove -f a second time produced exactly the same output.
    root@System:~/Downloads# sudo apt-get check
    Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f.
    root@System:~/Downloads# apt-cache policy libpcap0.8:amd64 libpcap0.8 libpcap0.8-dev
    libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8-dev: Installed: 1.5.3-2 Candidate: 1.5.3-2 Version table: *** 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status
    root@System:~/Downloads# sudo apt-get -f remove libpcap0.8 libpcap0.8-dev libpcap0.8-dev:i386 libpcap0.8:i386
    Reading package lists... Done Building dependency tree Reading state information... Done Package 'libpcap0.8-dev:i386' is not installed, so not removed. Did you mean 'libpcap0.8-dev'? You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libpcap-dev : Depends: libpcap0.8-dev but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    root@System:~/Downloads# sudo apt-get -f install
    Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • difference between using third-party social buttons and directly integrating each social button ourselves

    - by Jayapal Chandran
    I wanted to add specific social buttons to my article, so I used ShareThis. It gives a Facebook Like button, a Google Plus button, etc. by default, whereas in other articles of different modules I had integrated the Facebook Like button myself by following the documentation (including markup in the head section). What is the difference between adding it manually, with many markups, and using third-party code? Will that affect SEO or any other advantage with the respective social networking sites (here, for example, Facebook and Google Plus)?

    Read the article

  • How is time calculation performed by a computer?

    - by Jorge Mendoza
    I need to add a certain feature to a module in a given project regarding time calculation. For this specific case I'm using Java and reading through the documentation of the Date class I found out the time is calculated in milliseconds starting from January 1, 1970, 00:00:00 GMT. I think it's safe to assume there is a similar "starting date" in other languages so I guess the specific implementation in Java doesn't matter. How is the time calculation performed by the computer? How does it know exactly how many milliseconds have passed from that given "starting date and time" to the current date and time?
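
    As a minimal illustration of the representation the question describes (an addition, not part of the original post), Java exposes that millisecond count directly:
      import java.util.Date;

      public class EpochDemo {
          public static void main(String[] args) {
              // Milliseconds elapsed since January 1, 1970, 00:00:00 GMT (the Unix
              // epoch), read from the operating system clock.
              long millis = System.currentTimeMillis();
              System.out.println(millis);

              // A Date is essentially a thin wrapper around that same count.
              System.out.println(new Date(millis));
          }
      }
    Broadly speaking, the answer to "how does it know" is that the operating system keeps a counter, set from a battery-backed hardware clock at boot and advanced by timer interrupts (and usually corrected via NTP); the JVM simply asks the OS for it.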

    Read the article

  • detecting when you are going to reach your hit limit for a free Google Analytics account

    - by crmpicco
    I am a user of a free Google Analytics account and I'm slightly concerned that I may be approaching the 10,000,000-hit (pageviews, events, etc.) per month limit. Google state in their documentation: "These limits apply to the Web Property / Property / Tracking ID. 10 million hits per month per property. If you go over this limit, the Google Analytics team might contact you and ask you to upgrade to Premium or implement client sampling to reduce the amount of data being sent to Google Analytics." However, I note that there is nothing to say that you can review or check up on your current usage for the month. I have administrator access to the Google Analytics account, but I see no feature that lets me check up on my monthly usage. I don't know if Google offer this, either by means of the admin interface or via their support channels, but it would certainly be a useful feature. Is there any way for a free GA user to obtain this information?

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Conference (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22-25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there. This event was great and I had fantastic fun presenting at it. I was indeed very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and keeping the room standing-room-only every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here. I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subjects of SQL Server and performance tuning. I believe events like this provide a great platform to meet and share knowledge. Here are the abstracts of the sessions I presented. They were recorded, so at some point they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.
    Indexes, the Unsung Hero (relevant Pluralsight course)
    Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on the ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it is about performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes, and missing indexes, as well as best practices around indexes.
    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (relevant Pluralsight course)
    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on the detection of high-CPU scenarios and their resolutions. If time permits we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance tuning and the best practices associated with it. We will discuss performance tuning in this session with the help of demos.
    MySQL Performance Tuning – Unexplored Territory (relevant Pluralsight course)
    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance tuning and the best practices associated with it. You will also witness some impressive performance tuning demos in this session.
    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (relevant Pluralsight course)
    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child's play to some, the realities are far from this notion. The basics and fundamentals are simple and uniform across databases, but the behavior and the nuts and bolts of SQL Server are something we need to master over a period of time. With a collective experience of more than 30 years on databases amongst the speakers, we will try to take a unique tour of various aspects of SQL Server and bring to you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools, and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server. I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi.
    Summary
    If you missed this event, here are two action items: 1) Sign up for the Resource Newsletter. 2) Watch my video courses on Pluralsight.
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • What are the advantages and challenges of using HaXe over ActionScript 3?

    - by Arthur Wulf White
    I read this: http://webr3.org/blog/haxe/bitmapdata-vectors-bytearrays-and-optimization/ and these: http://stackoverflow.com/questions/5220045/what-are-the-pro-and-cons-of-using-haxe-over-actionscript-3 and http://www.haxenme.org/developers/documentation/actionscript-developers/. I read there are only two books; is that right? http://haxe.org/doc/book I also see that using FlashDevelop you can make an Adobe AIR project with HaXe and compile it to an exe. And I began to wonder, from a game-making perspective: I saw the example in the first link; are there any additional performance advantages to using HaXe instead of AS3 for game development (for the web)?

    Read the article

  • Beginner Geek: How to Use Multiple Monitors to Be More Productive

    - by Chris Hoffman
    Many people swear by multiple monitors, whether they're geeks or just people who need to be productive. Why use just one monitor when you can use two or more and see more at once? Additional monitors allow you to expand your desktop, getting more screen real estate for your open programs. Windows makes it very easy to set up additional monitors, and your computer probably has the necessary ports.
    Why Use Multiple Monitors?
    Multiple monitors give you more screen real estate. Hook up multiple monitors to a computer and you can move your mouse back and forth between them, dragging programs between monitors as if you had an extra-large desktop. People who swear by multiple monitors use them to display multiple things on-screen at a time. Rather than Alt+Tabbing and task switching to glance at another window, you can just look over with your eyes and then look back to the program you're using. Some examples of use cases for multiple monitors include:
    - Coders who want to view their code on one display with the other display reserved for documentation. They can just glance over at the documentation and look back at their primary workspace.
    - Anyone who needs to view something while working: viewing a web page while writing an email, viewing another document while writing something, or working with two large spreadsheets and having both visible at once.
    - People who need to keep an eye on information, whether it's email or up-to-date statistics, while working.
    - Gamers who want to see more of the game world, extending the game across multiple displays.
    - Geeks who just want to watch a video on one screen while doing something else on the other screen.
    Hooking Up Multiple Monitors
    Hooking up an additional monitor to your computer should be very simple. Most new computers come with more than one port for a monitor — whether DVI, HDMI, the older VGA port, or a mix. Some computers may include splitter cables that allow you to connect multiple monitors to a single port. Most laptops also come with ports that allow you to hook up an external monitor. Plug a monitor into your laptop's DVI or VGA port and Windows will allow you to use both your laptop's integrated display and the external monitor at once. This all depends on the ports your computer has and how your monitor connects. If you have an old VGA monitor lying around and you have a modern laptop with only DVI or HDMI connectors, you may need an adapter that allows you to plug your monitor's VGA cable into the new port. Be sure to take your computer's ports into account before you get another monitor for it.
    Managing Multiple Monitors With Windows
    Windows makes using multiple monitors easy. Just plug the monitor into the appropriate port on your computer and Windows should automatically extend your desktop onto it. You can now just drag and drop windows between monitors. To control how this works, right-click your Windows desktop and select Screen resolution. Choose an option from the Multiple displays box. The Extend option extends your desktop onto an additional monitor, while the other options are mainly useful if you're using an additional monitor for presentations — for example, you could mirror your laptop's desktop onto a large monitor or blank your laptop's screen while it's connected to a larger display. Be sure to arrange your monitors properly so Windows understands how your monitors are physically positioned. Windows 8 allows you to extend your Windows taskbar across multiple monitors. You'll find this option in the taskbar's options window — right-click the taskbar and select Properties. You can also choose where you want Windows to display taskbar buttons for open programs — on any monitor's taskbar, or only on the taskbar of the associated monitor. Windows 7 doesn't have these convenient features built in — your second monitor won't have a taskbar. To extend your taskbar onto an additional monitor, you'll need a third-party utility like the free and open-source Dual Monitor Taskbar. If you just have a single monitor, you can also use the Aero Snap feature to quickly place multiple Windows applications side by side. On Windows 7 or 8, press Windows Key + Left or Windows Key + Right to make the current window take up the left or right half of your display. You could also drag any window's title bar to the left or right edge of your screen and release the window. How useful this feature is depends on your monitor's size and resolution. If you have a large, high-resolution monitor, it will allow you to see a lot. If you have a smaller laptop monitor with the seemingly standard 1366×768 resolution, you won't be able to see much of each snapped window at once, so snapping windows may not be practical.
    Image Credit: Chance Reecher on Flickr, Camp Atterbury Joint Maneuver Training Center on Flickr, Xavier Caballe on Flickr

    Read the article

  • Graffiti is a Sinatra-inspired Groovy Framework

    - by kerry
    Playing around with Sinatra the other day, I realized I could really use something like this for Groovy. Thus, Graffiti was born. It's basically a thin wrapper around Jetty. At first, I thought I might write my own server for it (everybody needs to do that once, don't they?), but decided to invoke the 'simplest thing that could possibly work' principle. Here is the requisite 'Hello World' example:
      import graffiti.*

      @Grab('com.goodercode:graffiti:1.0-SNAPSHOT')
      @Get('/helloworld')
      def hello() {
          'Hello World'
      }

      Graffiti.serve this
    The code, plus more documentation, is hosted under my github account.

    Read the article

  • How to make rigid bodies collide with Apex Clothing in PhysX for Maya

    - by b1nary.atr0phy
    According to the [Apex] Clothing Overview section of the documentation: "Colliding with Rigid Bodies: Rigid bodies present in your scene will push clothing around roughly as you might expect." Well, I beg to differ. The Apex cloth collides with the floor just fine, but that's about the only thing it collides with (unless I add a ragdoll to the same skeleton that the cloth is attached to). So, for example, if I try to bounce a ball (a dynamic rigid body) into the cloth, it simply bounces through it. If I try to walk an actor with a ragdoll through it, he simply clips through it as well. Anyone have any insight on this?

    Read the article

  • Is it true that the Google Spider gives the most relevance to the first 68 characters of a search result's <title>?

    - by leeand00
    I am reading documentation about my CMS and it states that an HTML page's <title> tag is really important in SEO. It states that the Google Spider gives the most relevance to the first 68 characters of a site title (68 being the number of characters that Google will display in its search engine result pages). Can anyone verify this is still true? I read in The Information Diet that content farms were getting too good at gaming Google's algorithm for collecting and posting SERPs, and so Google had to change the search algorithm.

    Read the article

  • A tour of the GlassFish 3.1.2 DCOM support

    - by alexismp
    While we've mentioned the DCOM support in GlassFish 3.1.2 several times before, you'll probably find Byron's DCOM blog entry useful if you're using Windows as a deployment platform for your GlassFish cluster. Byron discusses how DCOM is used to communicate with remote Windows nodes participating in a GlassFish cluster, which Java libraries were used to wrap around DCOM, what new asadmin commands were added (in particular validate-dcom), as well as some tips to make this all work in your specific environment. In addition to this blog post, you should consider reading the official product documentation:
    - Considerations for Using DCOM for Centralized Administration
    - Setting Up DCOM and Testing the DCOM Set Up

    Read the article

  • Qt question: hard-coded shortcuts

    - by miLo
    I have asked this question in several places but I still can't figure it out. What I am trying to do is to have a QKeySequence(Qt::CTRL + Qt::Key_X, Qt::CTRL + Qt::Key_C) in a MainWindow with a QTextEdit as the central widget. The problem is that I have a shortcut for Cut (Ctrl+X), and when I press Ctrl+X,Ctrl+C it doesn't work. When the focus is on a different widget, the shortcut works perfectly. I tried overriding QWidget::keyPressEvent and QWidget::event, but it is the same. I have one more question: if I have these two shortcuts, Ctrl+X and Ctrl+X,Ctrl+C, why don't I receive the signal activatedAmbiguously() when I press Ctrl+X? According to the Qt documentation: "When a key sequence is being typed at the keyboard, it is said to be ambiguous as long as it matches the start of more than one shortcut."

    Read the article
