Search Results

Search found 26704 results on 1069 pages for 'row size'.


  • How to determine the amount to spend per phrase on AdWords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. Whilst I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test out a wide range of keywords that we think would lead to conversions, which we've put together through brainstorming and with some help from Google's External Keyword Tool. Sub-question whilst I remember: am I correct in thinking that, in Google's keyword tool, keywords we expect to perform well that have low competition yet high monthly searches are good, since there will be fewer advertisers and our bid per click will therefore be lower? Is there a common benchmark or process for doing a round of tests with keywords? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable given the cost per click of that keyword), then drop the ones which aren't converting and put that budget onto the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per keyword/phrase is going to work out as quite an expensive test. It would be nice to be able to spend $5-10 per phrase, but I don't think the sample size would be large enough to determine anything usefully reliable. Another approach might be to set up all the keywords, and those that bring the most sales within x hours/days would be the ones we use. What is the common procedure with things like this? I know there are a plethora of companies that specialize in exactly this, but this is something we anticipate doing a lot in the future, so it would make sense to do it in house if at all possible.
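
    The sample-size worry can be made concrete with a rough confidence-interval calculation: the number of clicks needed to estimate a conversion rate p to within a margin e at roughly 95% confidence is about n = 1.96^2 * p(1-p) / e^2. A minimal sketch, with purely illustrative numbers (the 2% baseline rate and the 1-point margin are assumptions, not figures from the question):

        # Rough clicks-per-keyword estimate for a conversion-rate test.
        awk 'BEGIN {
            p = 0.02;    # assumed baseline conversion rate (2%)
            e = 0.01;    # acceptable margin of error (+/- 1 percentage point)
            z = 1.96;    # ~95% confidence
            n = z * z * p * (1 - p) / (e * e);
            printf "clicks needed per keyword: about %d\n", n   # ~750 with these inputs
        }'

    At typical cost-per-click figures that is far more than $5-10 per phrase buys, which is why even the 100-click heuristic is on the optimistic side for low conversion rates.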

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS device. When I write a file to the device using eSATA or an SMB share, I cannot write files over 4 GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP it doesn't matter; I can write files of any size. E.g. I wrote one on there last night which was 30 GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.
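
    If the drive really is FAT32, individual files are capped at 4 GiB minus one byte, so the eSATA/SMB behaviour is expected; the FTP result is the surprising part and worth re-checking (for example, whether the 30 GB file is actually intact on the disk). A small sketch of a pre-copy check that splits anything over the FAT32 limit; the file path and the destination share are assumptions:

        # FAT32 caps a single file at 4 GiB - 1 byte.
        FILE="$HOME/big-video.mkv"                      # assumed example file
        LIMIT=$((4 * 1024 * 1024 * 1024 - 1))
        SIZE=$(stat -c %s "$FILE")
        if [ "$SIZE" -gt "$LIMIT" ]; then
            # Split into 2 GiB chunks; reassemble later with: cat "$FILE".part-* > "$FILE"
            split -b 2G "$FILE" "$FILE.part-"
        else
            cp "$FILE" /path/to/mounted/share/
        fi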

    Read the article

  • Ubuntu 12.04.1 LTS and Nvidia driver (304.51) 64-bit: problem 640x480

    - by nibianaswen
    I have a problem with this configuration: Asus K55V, Ubuntu 12.04 LTS and Nvidia driver 304.51. I removed the nouveau driver with: apt-get --purge remove xserver-xorg-video-nouveau I installed the official NVIDIA driver (from www.nvidia.com), but when I reboot the PC the screen resolution is only 640x480 and the monitor is resized. Changing xorg.conf did not solve the problem. Now I have uninstalled the NVIDIA driver and reinstalled it with sudo apt-get purge nvidia-current sudo apt-add-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current When I reboot, the screen resolution and size are OK, but if I start nvidia-settings I receive the message: You do not appear to be using the NVIDIA X driver. and with the command: sudo lshw -c display | grep driver I receive configuration: driver=i915 latency=0 This sounds like the system is using the Intel card. When I launch the command lspci | grep VGA the output is: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1058 (rev ff) And there is no /etc/X11/xorg.conf. I have read a lot of guides on the internet but without success. How can I use the NVIDIA card with the driver I have installed?
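
    The lspci output (Intel Ivy Bridge plus an NVIDIA device) suggests an Optimus hybrid-graphics laptop, where X normally runs on the Intel chip and the NVIDIA GPU has to be invoked explicitly. On 12.04 the usual route was Bumblebee rather than the upstream installer; a sketch, assuming the Bumblebee PPA of that era is still reachable and the nvidia.com driver has been removed with its own uninstaller first:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # After a reboot, run individual applications on the NVIDIA GPU:
        optirun glxgears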

    Read the article

  • What does dd conv=sync,noerror do?

    - by dding
    So what is the case where adding conv=sync,noerror makes a difference when backing up an entire hard disk onto an image file? Is conv=sync,noerror a requirement when doing forensic stuff? If so, why is that the case, with reference to Linux Fedora? Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error when reading a block (let's say the block size is 100M), does dd just skip the 100M block and read the next block without writing anything (dd conv=sync,noerror writes zeros for the 100M of output - so what about this case?)? And is the hash of the original hard disk different from that of the output file if done without conv=sync,noerror? Or only when a read error occurred?
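
    With noerror alone, dd keeps going after a read error but writes nothing for the unreadable block, so everything after it shifts and offsets in the image no longer match the disk; adding sync pads each failed or short read with zeros, keeping the image aligned. A sketch of the usual forensic-style invocation and a hash comparison; the device name and output paths are assumptions:

        # Keep going on read errors and zero-pad bad blocks so offsets are preserved.
        sudo dd if=/dev/sdb of=/mnt/evidence/sdb.img bs=64K conv=noerror,sync

        # Hash both; they match only if every block was readable and the disk
        # size is an exact multiple of bs (sync also pads a short final block).
        sudo dd if=/dev/sdb bs=64K | sha256sum
        sha256sum /mnt/evidence/sdb.img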

    Read the article

  • df command shows no output

    - by user119720
    I'm running a Linux distro on my server. When I want to verify the size of the disk, I issue this command to get the output: df -h But it does not produce ANY output. Strangely enough, when I issue other commands such as fdisk -l or du -h they show output normally. Does anyone know why this is happening? Thanks. Edit: here is the output of cat /etc/fstab
        none /dev/pts devpts rw 0 0
    and this is for the mount command
        none on /dev/pts type devpts (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    Edit (2): here is the output of cat /proc/mounts
        /dev/vzfs / vzfs rw,relatime,usrquota,grpquota 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        none /dev/tmpfs rw,relatime 0 0
        none /dev/pts devpts rw,relatime 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
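
    Given the vzfs root, this looks like an OpenVZ container, and df decides what to report from the mount table (/etc/mtab or /proc/mounts), which can confuse it there. A few stock-command cross-checks that get at the same numbers a different way:

        df -h /                              # ask about the root filesystem explicitly
        stat -f /                            # block counts straight from statfs()
        strace -f df -h 2>&1 | tail -n 20    # see which file or call df stalls on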

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200 GB) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third type. I proposed to use ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our Ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50 GB of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?
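
    For reference, the PITR approach the colleagues describe boils down to continuous WAL archiving plus periodic base backups; restore time is then dominated by how much WAL has accumulated since the last base backup, which is exactly the 50 GB concern. A minimal sketch, assuming PostgreSQL 9.1+; the /backup path and the recovery timestamp are placeholders:

        # postgresql.conf on the primary (archive every completed WAL segment):
        #   wal_level = archive
        #   archive_mode = on
        #   archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

        # Take a compressed base backup (run e.g. nightly to keep WAL replay short):
        pg_basebackup -D /backup/base/$(date +%F) -F t -z -x

        # To recover to just before a bad migration, restore the latest base backup
        # into a fresh data directory and add a recovery.conf such as:
        #   restore_command = 'cp /backup/wal/%f %p'
        #   recovery_target_time = '2012-06-01 09:55:00'   # illustrative timestamp

    Taking base backups more often (or keeping a warm standby that lags by a few hours) is the usual way to bound that replay window without leaving Linux.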

    Read the article

  • MPlayer refuses to generate mono wav file

    - by JCCyC
    I want to downsample an existing audio file to 8 kHz mono. This command line downsamples it to stereo: mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1 -ao pcm:waveheader:file="/tmp/blah1.wav" ~/from_my_cellphone.3ga It generates a file that the file utility identifies as stereo: $ file /tmp/blah1.wav /tmp/blah1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 8000 Hz Now, if I read the documentation correctly, I should add pan=1:0.5:0.5 so I get a file that's half the size: mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1:pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga But it doesn't! blah2.wav is identical to blah1.wav! What am I doing wrong?
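
    One thing that may be going wrong: in mplayer's -af list, filters are separated by commas while a filter's own parameters are separated by colons, so chained with a colon the pan=1:0.5:0.5 part is swallowed as an extra (ignored) parameter of resample rather than running as its own filter. A sketch of the corrected invocation, using the same paths as above:

        mplayer -quiet -vo null -vc dummy \
            -af volume=0,resample=8000:0:1,pan=1:0.5:0.5 \
            -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga

        file /tmp/blah2.wav   # expect: ... Microsoft PCM, 16 bit, mono 8000 Hz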

    Read the article

  • Site-to-site VPN

    - by ronadona
    We are a small business based in Sydney that has opened a new office in London. The Sydney office has 25 employees and the London office has 6, so the traffic isn't that high. The files to be transferred are Excel sheets of 15 MB max. Both locations have MS Server 2008 and Fortigate gateways. I set up a site-to-site VPN but it's extremely slow. Maybe this is because our upload speed is only 1 Mbps. We will increase the upload speed to 20 Mbps in both locations, but I am afraid this will not solve the problem, as the two locations are far from each other. What's the best way to go? Shall we find a provider for the VPN, or is there another technology that can be used over the internet without paying extra costs? Many thanks!

    Read the article

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience In PeopleSoft we strive to improve the user experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes to waste time waiting for pages to load and watching a spinning hourglass go on and on. Those performance-affecting server trips, page-load waits and just-too-many clicks had been complained about for a long time. Something had to be done. A few new designs arrived in PeopleSoft 9.2 to help users access their everyday work areas more easily and quickly. For example, Dashboard and Work Center aggregate the most accessed information sections on a single page; Related Information allows users to complete transaction-related research without interrupting a transaction; and Secure Search gets users to a specific page directly. Today we'll talk about the Actions menu. Most PeopleSoft pages are shared between individual products and product lines. That means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline navigation and cut down on accessing PeopleSoft pages one page at a time, we introduced a new menu design. The new menu allows accessing shared pages without the Oracle development team making any local changes, and it works as an additional one-click path to specific high-traffic actionable pages. Let's look at how many steps it took to Change Salary for an employee in HCM 9.1 before: Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1 In PeopleSoft 9.1 it took 5 steps + page loading time + additional verification time to make sure the correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete the Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Actions menu within a table, choose a menu option, and access the correct employee's details page to take the action. Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2 The new menu is placed at the row level, which ensures the user accesses the correct employee's details page. The Actions menu separates menu options into hierarchical sections, which helps users scan and reach the correct option quickly. The new menu's small size and structure enable users to access high-traffic pages from any page and from any part of the page. No more spinning hourglass, no more multiple page loads. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product. Now users can: Access any target page no matter how far it is buried from the starting point; Reduce navigation and page-load time; Improve productivity and reduce errors. The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file. But now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. It stays like that for a couple of minutes, CPU usage goes back to normal, and then the client says that all is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB the files do get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error: - twisted - ERROR - Failure: exceptions.TypeError whose meaning I don't know. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Virtualbox fullscreen mode problem

    - by AtharvaI
    Hi all, I have a slightly weird issue with VirtualBox. My host OS is Win7 (64-bit) and the guest OS is Ubuntu 10.10 (64-bit). When I switch to fullscreen mode in VirtualBox, the Ubuntu display resizes to fit my screen size. However, after that the display is not updated. So if I click a menu or something, I don't see it appear, but it seems to work in the background: if I click a menu in fullscreen mode, I don't see anything happen, but if I then switch to windowed mode I see the menu already open. I have installed the VirtualBox guest additions. If anyone has a similar issue or has found a solution, please let me know. Thanks.

    Read the article

  • Help identify the pattern for reacting to updates

    - by Mike
    There's an entity that gets updated from external sources. Update events arrive at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, the most current state of the entity needs to be processed. There's a point of no return during processing where the current state of the entity (a consistent state, i.e. no partial update) is saved somewhere else and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one running processing job (before the point of no return), i.e. the entity state should not be processed more than once. So what I'm looking for is a pattern to cancel the current processing before the point of no return, or abandon the processing results, if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database with some files on disk, and the system is in .NET with web services and message queues. What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return. Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and this approach also lacks elegance. Is there a name for this pattern and an existing solution?

    Read the article

  • My WD passport detected but not shown on my computer

    - by Rakesh Patel
    A 1 TB WD hard disk is detected by the laptop, but the drives are not visible in My Computer. Under Disk Management the drives are shown without letters, and right-clicking on these drives gives only 2 options: Delete Volume and Help. The same hard disk was running well on the same laptop a day before. It is also showing the full size of the hard drives, Healthy (Primary Partition), etc. (This happened after the hard disk was used by a guest.) The hard disk is fairly new (about 5 months of usage). Please suggest: Is there a hardware-related problem inside the hard disk, and if so, is it repairable? Is there any option to recover the data?

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700), Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates an "sdcard/DCIM" folder but only stores some thumbnails there. Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises the photos and starts uploading them. Possible solution: could there be an option in the settings to set a preferred photo folder? Problem nr 2: Out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size: 1.1 MB files have been uploaded fine whilst some that failed are 0.8 MB. Problem nr 3: The photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they are synced? After a while of usage, syncing and uploading files from different clients, it becomes a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all. Another question: If my SD card in some way breaks down / some folders cannot be read / the card is temporarily changed while U1 is running, does U1 consider that as files deleted and also delete them from the cloud?

    Read the article

  • How to manage many mobile device users on the server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well as I had a low number of users, but now that I have an increasing number of users (about 1500, +100 every day) it has revealed a major problem in my design. In my Google App Engine servlet I have a static HashMap that holds all the user profile objects, currently 1500, and this number will increase as more users register. Why am I doing it? Every user that requests the users around him compares his GPS position with the other users' and checks if they are within his 10 km radius. This happens every five minutes on average. Consequently, I can't fetch the users from the DB every time because the GAE read/write operation quota would tear me apart. What is the problem with this design? As the number of users increases, the HashMap turns to null every 4-6 hours; I think this interval is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the DB every time I detect that it has become null, but this causes a DoS to my users for 30 seconds, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap. Am I right? I have been advised to use a spatial database, but that means I can't work with GAE any more, and it means I need to build my big server all over again and lose my existing DB. Is there something I can do with the existing tools? Thanks.

    Read the article

  • Nvidia 9 series or Intel HD 2000? [closed]

    - by EApubs
    I just tested an Nvidia 9300 GS card against an Intel Core i3 HD 2000 graphics system. Here are the Windows Experience Index scores I got:
        Nvidia 9300 GS: Base Score 3.9 (Processor 7.1, Memory 7.5, Graphics 3.9, Gaming Graphics 5.1, Hard Disk 5.9)
        Intel HD 2000: Base Score 5.2 (Processor 7.1, Memory 5.9, Graphics 5.2, Gaming Graphics 5.8, Hard Disk 5.9)
    My questions are: Why does using the Intel HD graphics reduce the score of my RAM? How is that possible? It checks the speed of the RAM, not the size (I think). The Intel graphics take some of the RAM space, but how can that affect the speed? Which of the two would be the better choice?

    Read the article

  • linux: accessing thousands of files in hash of directories

    - by 130490868091234
    I would like to know what is the most efficient way of concurrently accessing thousands of files of a similar size in a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file to index. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and I place 1 file at the end of each directory, like ./00/00/00/file000000.ext to ./00/00/00/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
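
    A common variant of this layout derives the bucket from a hash of the file name rather than a running counter, so the placement is computable from the name alone and stays evenly spread as files are added or removed. A minimal sketch; the two-level, 256-buckets-per-level depth is an assumption to tune to your file counts:

        # Place a file under ./xx/yy/ where xxyy are the first hex digits of its name's MD5.
        place() {
            f="$1"
            h=$(printf '%s' "$(basename "$f")" | md5sum | cut -c1-4)
            d="./${h:0:2}/${h:2:2}"
            mkdir -p "$d"
            mv "$f" "$d/"
        }
        place ./incoming/file000123.ext    # ends up in something like ./3f/a2/file000123.ext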

    Read the article

  • Subversion installation on CentOS 5.8

    - by user57221
    I am trying to install Subversion on CentOS 5.8 using yum install subversion and it is throwing the error below.
        ..... ....
        Total size: 7.3 M
        Is this ok [y/N]: y
        Downloading Packages:
        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        libapr-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64
        libaprutil-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64
        libapr-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64
        apr is needed by (installed) httpd-2.2.22-12051516.x86_64
        /usr/lib64/libapr-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64
        libaprutil-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64
        apr-util is needed by (installed) httpd-2.2.22-12051516.x86_64
        /usr/lib64/libaprutil-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64
        Complete! (1, [u'Please report this error in http://bugs.centos.org/yum5bug'])
    How do I resolve this?
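
    The error pattern (nothing in the enabled repositories provides libapr-1.so.0, yet an installed httpd-2.2.22-12051516 claims to need it) suggests the Apache stack came from a custom build rather than the CentOS base packages, so yum cannot find the stock APR libraries Subversion wants. A couple of things worth trying, as a sketch; the package names are the standard CentOS 5 ones:

        # See where the existing Apache stack came from and whether stock APR is available:
        rpm -qi httpd | grep -i vendor
        yum list apr apr-util

        # If the stock packages are acceptable alongside the custom httpd:
        yum install apr apr-util && yum install subversion

    If the custom Apache build forbids installing the stock APR packages, building Subversion from source (or using an RPM from a third-party repository) avoids touching the web-server stack.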

    Read the article

  • How much RAM does 64-bit Windows 8 reserve for internal OS use?

    - by Barleyman
    Windows reserves some memory for its internal use which is not normally allocated to applications. This reserve is seen most easily if you run without a page file or limit the page file to a relatively small size (such as 3 GB). Windows will allocate primarily RAM up to the limit, fill up the remaining free space in the page file (if any), and issue a low memory warning when there is no page file space left and the allocated RAM limit is exceeded. The limit appears to be a percentage of the total system RAM. The Windows 7 x64 limit is discussed here, and methods for circumventing the "low memory warning" are discussed here. Disabling the low memory warning has an advantage (you can use some 600 MB more RAM on an 8 GB machine) but also a serious disadvantage: when you're out of RAM, programs will crash. How much RAM can you allocate on 8 GB Windows 8 x64 before you get the low memory warning? Is it possible to adjust the warning threshold?

    Read the article

  • Asterisk failing at startup after upgrading to asterisk18

    - by Supratik
    I was using asterisk16 and asterisk16-skypeforasterisk, which were working fine. I recently upgraded to asterisk18 and asterisk18-skypeforasterisk, and after that I am receiving the following error message:
        Asterisk ended with exit status 1
        Asterisk died with code 1.
        Asterisk could not start!
        Use 'tail /var/log/asterisk/full' to find out why.
    When I checked the log I got the following messages:
        codec_g729a.c: == Found total of 11 G.729 licenses
        translate.c: empty buf size, you need to supply one
    Now, if I remove the /var/lib/asterisk/licenses folder it works fine. Can you please tell me what could be the issue here? Warm Regards Supratik

    Read the article

  • Adobe Illustrator - Objects are Expanded on Restart

    - by Shatou Dev
    I use Adobe Illustrator extensively for mobile development. For example, for an Android app I have many illustrations in different densities and sizes, and I use many effects such as outer glow, round corners and such. However, sometimes when I close the .ai file and re-open it later, I find all my objects expanded. Effects are expanded; for example, the round corner effect is now expanded and I cannot resize any object. I am using a MacBook Pro. I suspect both Time Machine and Dropbox. As far as I can recall, it also happens with the CS6 version, not CS5.

    Read the article

  • Can you shrink the sparse disk image of a Mac OS X guest OS in VMWare Fusion?

    - by Paul D. Waite
    I use VMware Fusion on my Mac to run a virtual Windows 7 machine and the Microsoft IE compatibility Windows XP virtual machines. In VMware Tools on the Windows guest OSes, there's a "Shrink" option that lets you reduce the size of the sparse disk image used by the guest OS, to save hard drive space on your host OS. I've recently created another virtual machine, this time running Snow Leopard Server. I was wondering if I could shrink the sparse disk image used by this machine too, but I can't find a VMware Tools app in the Mac guest OS, even though VMware Tools has been installed (as VMware's Shared Folders feature is working). Is there any way to shrink the sparse disk image used by Mac OS X guest OSes in VMware Fusion?
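
    Without a Shrink button in the Mac guest, the manual two-step that the Windows tool automates may still work: zero out the guest's free space, then compact the sparse disk from the host. A sketch under the assumptions that the disk is a growable (sparse) image, the VM is shut down for the second step, and the paths below are placeholders for your own (the vmware-vdiskmanager location also varies by Fusion version):

        # 1) Inside the Mac OS X guest: fill free space with zeros, then delete the file.
        dd if=/dev/zero of=~/zerofill bs=1m || true   # dd stops when the disk is full
        rm ~/zerofill

        # 2) On the host, with the VM powered off, shrink the sparse .vmdk:
        "/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager" \
            -k "$HOME/Documents/Virtual Machines/Snow Leopard Server.vmwarevm/Virtual Disk.vmdk"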

    Read the article

  • uploading large files (mp4) to IIS 7.5 gives 500 Internal Server Error

    - by dragon112
    I made a website on which I need to be able to upload video files, and it has worked for quite a while. However, after a while it just stopped working, and now IIS gives me a 500 Internal Server Error when I upload a video. Images do work (possibly due to their smaller size). I use an HTML form with a PHP server-side script to upload. I have already set the user permissions for the entire inetpub to allow all actions for the IIS user. If you have any idea what it could be, PLEASE tell me; I have been trying to fix this for weeks now. Thanks in advance!
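
    Two limits commonly behind a 500 on large uploads to IIS + PHP are PHP's own upload caps and IIS request filtering's maxAllowedContentLength, which defaults to roughly 30 MB and must be raised in web.config for video-sized posts. A sketch of the PHP side only; the php.ini path and the 512M figure are assumptions to adjust:

        # Append raised limits to php.ini (later occurrences of a directive win);
        # recycle the IIS application pool afterwards so PHP picks them up.
        PHP_INI="C:/PHP/php.ini"                      # assumed install path
        {
            echo "upload_max_filesize = 512M"
            echo "post_max_size = 512M"
            echo "max_execution_time = 600"
            echo "max_input_time = 600"
        } >> "$PHP_INI"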

    Read the article

  • How an SSD hard drive affected the speed of your website (ASP.NET/LINQ/MS SQL database)

    - by Sergey Osypchuk
    I have a small database (<1 GB), but we have a lot of complex logic in the website and the client complains about render time, which is 3-5 seconds. We are not Google, and thousands of users a day is our dream, so size is not a problem, but speed is important. Can anybody share their experience with SSD drives for an ASP.NET (MVC)/LINQ/MS SQL based application? How much did your performance increase? UPDATE: this whitepaper states that it will be 20 times faster. http://www.texmemsys.com/files/f000174.pdf

    Read the article

  • Create a custom shortcut that types clipboard contents

    - by briankb
    I want to paste my clipboard contents into a remote session such as VNC, IPMI, or Raritan. To accomplish this, I installed xdotool and xclip. Then I wrote a simple command that types the clipboard contents: xdotool type "$(xclip -o)" This works if I stay in a terminal window and type that command myself: it types back my clipboard contents when I run the command. Of course, now I want to make this into a hotkey that works in any window. However, if I create a custom shortcut using Keyboard settings, it doesn't work. If I assign the hotkey Alt+K to the shortcut, nothing happens when I press it. If I use Ctrl+K, unexpected behavior occurs in whatever window has focus, e.g. my terminal window size shrinks (it's somewhat amusing, actually). Similar results occur if I save it as a script and call the script, or if I encapsulate the command with sh -c. How can I make practical use of the powerful xdotool type command?
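
    One likely culprit: when the shortcut fires, the hotkey's modifier keys (Ctrl/Alt) are still physically held down, so xdotool ends up "typing" modified keystrokes into the focused window, which is exactly the kind of accidental resize described. A sketch of a wrapper script to bind the hotkey to instead; the script path is an assumption:

        #!/bin/bash
        # ~/bin/type-clipboard.sh - type the clipboard into the focused window.
        sleep 0.3                       # give the hotkey's modifiers time to be released
        xdotool type --clearmodifiers --delay 50 "$(xclip -o -selection clipboard)"

    Binding the custom shortcut to this script (and releasing the key combination promptly) avoids the modifier interference.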

    Read the article
