Search Results

Search found 68828 results on 2754 pages for 'knapsack problem'.


  • best way to enlarge system partition

    - by yuvi
    I have a problem - I need to enlarge my system partition. When I initially installed Ubuntu, I split the disk so I have 15GB for the system and the rest (around 400GB) pointed at /home/. This is very useful if anything goes wrong someday and I want to format and completely re-install Ubuntu without losing any of my actual data. The problem is that 15GB isn't enough, it seems. I already moved the /var/ and /opt/ folders to /home/, adding symlinks at root, but I'm still at 86% usage and I'm having performance issues (mostly when booting or running a VM). I could boot Ubuntu from a flash drive and enlarge the partition externally, but I'm really wary of going forward with that plan. Also, despite what I said before, I'd like to avoid re-installing the system if at all possible. Any advice, suggestions or ideas on how to best approach this? Any warnings I should heed? Thanks in advance! Update: here's the GParted screenshot - as you can see, there's Windows on dual boot (sda1-5 are all related to the Windows system), then I have a Linux swap, 14GB (so, uh... not even 15) for the system and 435GB for /home.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post shows how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, and it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR, a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the money-weighted rate of return, but in the actual customer proof of concept we used R to calculate both money-weighted and time-weighted rates of return. You can learn more about the money-weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First steps, calculating IRR in R: We will start by calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you need balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. The sample spreadsheet provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well, and R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet by sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData. (A sketch of what such a helper might look like follows.)
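    The post does not reproduce the body of the getsheet() helper, so here is a minimal sketch of one way such a helper could be written around RODBC. The workbook file name, the sheet names, and the added account column are assumptions for illustration only, not the customer's actual code:

        # Sketch only: read each worksheet (one per sample account) and stack them.
        library(RODBC)

        getsheet <- function(conn, sheetname) {
          # one worksheet; Excel sheets are addressed as [SheetName$] through ODBC
          sqlQuery(conn, paste0("SELECT * FROM [", sheetname, "$]"))
        }

        conn   <- odbcConnectExcel2007("SimpleMWRRData.xlsx")   # hypothetical file name
        sheets <- c("Account1", "Account2", "Account3")          # hypothetical sheet names
        SimpleMWRRData <- do.call(rbind, lapply(sheets, function(s) {
          cbind(account = s, getsheet(conn, s))                  # tag each row with its sheet
        }))
        odbcClose(conn)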
    Writing the IRR function: At this point, it was time to write the money-weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook. In the customer proof of concept, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.)

    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an npv function of bmv, cf, t, emv, and tend, where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum. optimize is called with the wrapped function plus low and high, scalars that indicate the range to search for an answer. To test this out, we set values for bmv, cf, t, emv, tend, low, and high, with low and high given some reasonable defaults. For example, one sample account had a negative 2.2% money-weighted rate of return.

    Enhancing and packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, and then, if we did not find an answer, call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can write a simplified version of our MWRR function; a sketch appears below. (Our real-world version is more sophisticated in that it calculates rates of return for five different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In the actual customer proof of concept, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package by calling package.skeleton(). To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the man subdirectory; in those files, you need to at least provide a value for the "title" section.
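    The post's actual npv and SimpleMWRR code is not reproduced here, so the following is a minimal sketch under stated assumptions: the NPV expression is one common money-weighted formulation, the argument order and default search range are guesses, and the example values are made up:

        # Sketch only. One common money-weighted NPV formulation:
        #   npv(r) = bmv*(1+r)^tend + sum( cf_i*(1+r)^(tend - t_i) ) - emv
        npv <- function(r, bmv, cf, t, emv, tend) {
          bmv * (1 + r)^tend + sum(cf * (1 + r)^(tend - t)) - emv
        }

        # Find the r that drives npv() to zero by minimizing |npv(r)| with optimize()
        SimpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 10) {
          fit <- optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)),
                          lower = low, upper = high)
          fit$minimum   # the approximated money-weighted rate of return
        }

        # Hypothetical example values (not the customer's data)
        SimpleMWRR(bmv = 100000, cf = c(5000, -2000), t = c(0.25, 0.75),
                   emv = 99000, tend = 1)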
    Once that is done, you can change directory to the IRR directory and build and install the package from the command line (for example, with R CMD INSTALL). The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package.

    Testing the myIRR package: Once the function was converted to an installable package, we tested it by loading the package and calling SimpleMWRR against the sample data.

    Calculating IRR for all the accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "split-apply-combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function. (Other approaches to split-apply-combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.) A sketch of using "by" to calculate the money-weighted rate of return for each account in our sample data set follows this post.

    Recap and next steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things: R could easily work with the spreadsheets of sample data we were given; R's optimize() function provided a nice way to solve for IRR (it was both fast and allowed us to avoid having to code our own iterative approximation algorithm); R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations, and these could be easily added to our IRR functions; and the split-apply-combine technique can be used to perform calculations of IRR for multiple accounts at once. However, there are several challenges yet to be conquered at this point in our story: the actual data that needs to be used lives in a database, not in a spreadsheet; the actual data is much, much bigger, too big to fit into the normal R memory space and too big to want to move across the network; the overall process needs to run fast, much faster than a single processor; the actual data needs to be kept secured, another reason not to move it from the database and across the network; and the process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes. In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
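    Since the post's test and "by" snippets are not included here, the sketch below shows one plausible way to load the installed package and apply SimpleMWRR per account. The column names (account, BMV, FLOW, EMV, time) are assumptions about the sample data's layout:

        # Sketch only: exercise the packaged function across every account.
        library(myIRR)

        mwrr_by_account <- by(SimpleMWRRData, SimpleMWRRData$account, function(acct) {
          SimpleMWRR(bmv  = acct$BMV[1],          # first period's beginning value
                     cf   = acct$FLOW,            # all cash flows for the account
                     t    = acct$time,            # times relative to inception
                     emv  = tail(acct$EMV, 1),    # last period's ending value
                     tend = max(acct$time))
        })
        mwrr_by_account   # one money-weighted rate of return per account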

    Read the article

  • How should developers handle subpar working conditions? [closed]

    - by ivar
    I have been working at my current job for less than a year, and at the beginning I didn't have the courage to say anything about the things that bothered me. Now I'm a bit fed up and need things to get better. The first problem is not random, but I'll mention it anyway: we are running out of space, so every new employee gets a smaller desk. We are promised that the space problem will be fixed soon. Almost every employee has a different keyboard, mouse and headphones (if any). Mine are a $10 keyboard, some random cheap mouse and some random crappy headphones with a mic. All of these were used and dirty when I got them. The number of monitors is one to three, in different sizes. I have two nice monitors and can't complain, but some people are given one small monitor. When it's their first job they don't have the guts to ask for two, even if most others have two. Nobody seems to care either. The project manager asked one of them if it was OK; he obviously said he could handle the one small monitor. Then the manager said, "You can go ask for one more." I'm watching this and thinking: go and ask where? The company is trying to hire more people but is not doing much after the person has signed the contract. We are put in one room that is open to the hallway, and it's super noisy, almost like a zoo at times. Even if nobody is talking, the crappy keyboards make too much noise. Is this normal? Am I too negative, and should I just do my job with what I was given? Should I demand better things? Should the company have some system so that everybody gets equipment in some price range?

    Read the article

  • Why is my Ubuntu 10.10 CD not booting?

    - by Tom Brito
    I have downloaded Ubuntu 10.10 and burned the ISO, but it will not boot. I have ruled out problems with the ISO, as I downloaded it from the official website with no errors and burned it with no errors. I have ruled out problems with the burning, as it looks like it was recorded with no errors both here and later on another computer. I have ruled out problems with my DVD reader, as other CDs boot fine. I'm currently using Ubuntu 9.10. I know I can upgrade via the internet, but I have this same problem with my Windows XP CD, so I really would like to discover what's going on here. My Ubuntu 9.10 CD boots just fine, but the new one does not. What else could it be? Or what more precise tests can I run to discover where the problem is? More info: what happens when I try to boot with the Ubuntu 10.10 CD is that it behaves as if there's no bootable CD in the drive. It just doesn't find the boot loader on the CD and starts the system from the hard disk. My notebook is an Amazon PC with an Intel Celeron 1.5, 2GB of memory, a DVD-RW drive and a 260GB Samsung hard disk.

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32

    - by Microkernel
    I have some strange problems with a Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines, one in VirtualBox on Ubuntu and another, an office laptop. The Windows machine in VirtualBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content. The issue faced on the laptop is the following: shared folders on ext3 drives are accessible without issues, but the contents shared on NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open such a shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details). I changed the workgroup value to the domain name on the office laptop, but the problem persists. Here is the smb.conf I am using:
    [global]
    workgroup = XXX.XXX.ORG
    server string = %h server (Samba, Ubuntu)
    map to guest = Bad User
    obey pam restrictions = Yes
    pam password change = Yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    unix password sync = Yes
    syslog = 0
    log file = /var/log/samba/log.%m
    max log size = 1000
    dns proxy = No
    usershare allow guests = Yes
    panic action = /usr/share/samba/panic-action %d
    guest ok = Yes
    [homes]
    comment = Home Directories
    [printers]
    comment = All Printers
    path = /var/spool/samba
    read only = No
    create mask = 0700
    printable = Yes
    browseable = No
    [print$]
    comment = Samba server's CD-ROM
    path = /cdrom
    force user = nobody
    force group = nobody
    locking = No
    The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail.

    Read the article

  • bluetooth headset can connect, but not visible in pulse audio

    - by Kim Marivoet
    I have a Plantronics Bluetooth headset, and until yesterday I could use it without any problem. However, today it suddenly stopped working (maybe related to the last software update I did). I can still connect/disconnect my headset, but it doesn't show up in PulseAudio anymore. I read through various posts that describe roughly the same problem, but none of the suggested solutions worked. I get the following errors in the syslog:
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/HFPAG
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSource
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSink
    Oct 13 16:50:09 desktop kernel: [ 17.340943] input: 48:C1:AC:08:FE:8F as /devices/virtual/input/input14
    Oct 13 16:50:09 desktop bluetoothd[1040]: /org/bluez/1040/hci0/dev_48_C1_AC_08_FE_8F/fd0: fd(36) ready
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Successfully made thread 2213 of process 1892 (n/a) owned by '1000' RT at priority 5.
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Supervising 5 threads of 1 processes of 1 users.
    Oct 13 16:50:10 desktop bluetoothd[1040]: Badly formated or unrecognized command: AT+XEVENT=USER-AGENT,COM.PLANTRONICS,PLT_VOYAGERPRO,0109,27.90,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    Oct 13 16:50:10 desktop bluetoothd[1040]: Audio connection got disconnected
    Any help would be much appreciated. I'm using Ubuntu 12.04. Thanks, Kim

    Read the article

  • mpd conflicting with other applications -- taking control of pulse?

    - by Jamie Schembri
    Simple explanation: if mpd is playing and sound attempts to play through another application, x, sound from x will not be output. If sound from another application, x, is playing and mpd then attempts to play, no sound will be output from mpd while sound from x continues to play. Details: I first noticed this problem with Flash, and this continues to be the most common scenario. I posted a question about this before realising it was not strictly Flash-related, but instead something to do with mpd. My biggest frustration comes from trying to get mpd working again, as I can't seem to pin down any method. Sometimes pulseaudio -k seems to help, other times sudo /etc/init.d/mpd restart, other times killing Chromium (due to Flash) with SIGTERM. Most of the time it's a combination of the above. I think this might be because I run mpd as another user and use PulseAudio; it is not run as root or as the current user. Also, mpd is compiled with pulse support. I have tried numerous things, though I honestly couldn't recite what, as it has been some time. I'd rather not go poking around without some direction, but I'd be really happy to fix this problem once and for all. Here is my mpd.conf, simplified by removing comments and blank lines:
    music_directory "/var/lib/mpd/music"
    playlist_directory "/var/lib/mpd/playlists"
    db_file "/var/lib/mpd/tag_cache"
    log_file "/var/log/mpd/mpd.log"
    pid_file "/var/run/mpd/pid"
    state_file "/var/lib/mpd/state"
    user "mpd"
    bind_to_address "wilson"
    input {
        plugin "curl"
    }
    audio_output {
        type "pulse"
        name "My Pulse Output"
    }
    filesystem_charset "UTF-8"
    id3v1_encoding "UTF-8"
    For the sake of keeping this a question: does anyone know what is causing this, or how to fix it?

    Read the article

  • nvidia segfault crashes XServer (12.04 fresh install)

    - by Sébastien GILLES
    I am on a fresh install of 12.04, and my video card is a Quadro FX 570, which appears to be supported (see the output of unity_support_test below). But my X server crashes randomly, causing my session to just shut down and bring me back to the login screen. This happens several times per hour. After checking syslog, it appears that a segfault in the nvidia driver is the cause of the problem:
    Jun 13 11:14:17 lima kernel: [86569.828982] Xorg[17940]: segfault at ffebeb69 ip b4c8f7aa sp bf8076dc error 4 in nvidia_drv.so[b47a7000+706000]
    Jun 13 11:14:36 lima gnome-session[18119]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Ressource temporairement non disponible) on X server :3.#012
    The funny thing is that as I was reporting this bug, my X server crashed again (!) and this time I got another error message:
    Jun 13 11:29:39 lima kernel: [87491.106441] NVRM: Xid (0000:02:00): 6, PE0001
    Jun 13 11:29:39 lima gnome-session[26493]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Ressource temporairement non disponible) on X server :4.#012
    Any idea as to what the problem could be, given that I am on a fresh install with a supported video card? Sébastien
    Output of unity_support_test:
    $ /usr/lib/nux/unity_support_test -p
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: Quadro FX 570/PCIe/SSE2
    OpenGL version string: 3.3.0 NVIDIA 295.40
    Not software rendered: yes
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: yes

    Read the article

  • How to prevent multiple playing sounds from destroying your hearing?

    - by Rookie
    The problem is that when I play 100 sounds almost at the same time, all I hear is noise. It's not very attractive to listen to for 30 minutes straight. I tried to fix this by allowing only one sound of each type to be played at once, but it still sounds really ugly; eventually my brain keeps hearing only the very end of the shot sounds (or the start of them?), and that gets on my nerves really quickly. Eventually I would just decide to turn off the sounds completely. So is there any point in using sounds in a game like this at all? How does our dear reality handle this problem? If there is a war out there, how does it sound when hundreds of men shoot almost at the same time? Edit: here is how the game sounds currently; there aren't even 100 sounds playing at once, maybe 20? http://www.speedyshare.com/VTBDw/headache.mp3 At the beginning it sounds OK, but then it becomes unbearable! In that audio clip only one sound of each type is allowed to play at once, so a newly played sound stops the previous one. Edit 2: and here is the same headache, but with 32 simultaneous sounds allowed to play at the same time: http://www.speedyshare.com/TuWAR/headache-worse.mp3 Quite a torture, eh?

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a must-have. Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, should it come to a point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written by QA prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks at management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility of this requirement, I insisted, and I have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and the lack of time. This only happened after a two-hour meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Be aware of the difference between CURRENT_DATE and SYSDATE

    - by Kevin Smith
    I was running some queries in SQL Developer against the WebCenter Content (WCC) schema that included date fields such as dInDate. I was comparing the dates against CURRENT_DATE and was not getting the expected results. I did some googling and didn't find a solution, but I did run across a reference to SYSDATE. I tried SYSDATE in my queries and got the expected results. I did a TO_CHAR on the two date fields and found they returned different times: CURRENT_DATE returned the time from my laptop, which was in the EDT time zone, while SYSDATE returned the time from the database server, which happened to be in the PDT time zone. I guess if both the database server and my laptop had been in the same time zone I would not have seen any problem. Here is the query I ran to display the two fields: select to_char(current_date,'DD-MON-YY HH:MI:SS'), to_char(sysdate,'DD-MON-YY HH:MI:SS') from dual; Running this in SQL Developer, the two values definitely came back different. I'm sure there is some command or setting you can use to prevent this problem (CURRENT_DATE follows the session time zone, so something like ALTER SESSION SET TIME_ZONE should align the two), but for me the takeaway is to use SYSDATE in your queries when you want to do any date comparison.

    Read the article

  • Is Linq having a mind-numbing effect on .NET programmers?

    - by Aaronaught
    A lot of us started seeing this phenomenon with jQuery about a year ago when people started asking how to do absolutely insane things like retrieve the query string with jQuery. The difference between the library (jQuery) and the language (JavaScript) is apparently lost on many programmers, and results in a lot of inappropriate, convoluted code being written where it is not necessary. Maybe it's just my imagination, but I swear I'm starting to see an uptick in the number of questions where people are asking to do similarly insane things with Linq, like find ranges in a sorted array. I can't get over how thoroughly inappropriate the Linq extensions are for solving that problem, but more importantly the fact that the author just assumed that the ideal solution would involve Linq without actually thinking about it (as far as I can tell). It seems that we are repeating history, breeding a new generation of .NET programmers who can't tell the difference between the language (C#/VB.NET) and the library (Linq). What is responsible for this phenomenon? Is it just hype? Magpie tendencies? Has Linq picked up a reputation as a form of magic, where instead of actually writing code you just have to utter the right incantation? I'm hardly satisfied with those explanations but I can't really think of anything else. More importantly, is it really a problem, and if so, what's the best way to help enlighten these people?

    Read the article

  • Brother MFC-J470DW scan function "Check Connection"

    - by user292599
    I have a Brother MFC-J470DW printer connected to a Linux desktop (running Ubuntu 14.04) over a wireless router network. The printer works fine for printing and copying, but now I want to add the scan function. To set up scanning, I went to the Brother web page for this printer: http://support.brother.com/g/b/downloadlist.aspx?c=eu_ot&lang=en&prod=mfcj470dw_us_eu_as&os=128 and under Scanner Drivers selected "Scanner driver 64bit (deb package)", "Scan-key-tool 64bit (deb package)", and "Scanner Setting file (deb package)". For each package, I clicked the EULA and selected "open with Ubuntu Software Center". Then, after the USC window popped up, I clicked Install and the red line went from left to right. In each case, the USC window then showed a green checkmark and the Install button changed to Reinstall (that's how you know it worked). So now I try it out. Hitting the Scan button on the printer, selecting "Scan to file", and hitting OK produces the message "Check Connection". I checked the Brother Linux Information FAQ (scanner) page, and the 14th question seems the same as mine: "When I try to use the scan key on my network connected machine, I receive the error 'Check connection' or I can not select anything except 'scan to FTP'." I explored the solution given for this FAQ, but found from ifconfig that I am already using eth0, the default setting, so presumably that is not the problem. I also found brscan-skey installed in /usr/bin and ran:
    drrm@drrmlinux2:~$ brscan-skey -t
    drrm@drrmlinux2:~$ brscan-skey
    but that didn't help - I still get the "Check connection" message. What can you suggest to fix this problem?

    Read the article

  • Architecture- Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so I'm here to explore other options. Application overview: this is a lead aggregation/distribution platform. We will be focusing on the affiliate portion of this website. Essentially, affiliates sign up, enter marketing campaigns, and sell us their conversions. Problem to be solved: we want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking; very straightforward. Theoretical issues: I thought I would ask affiliates to place the pixel where I could log impressions, and set a third-party cookie when the pixel is first called. Then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the affiliate's server, I can't access the cookie and associate it with the lead record unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server-side form processor would be ideal. I've considered writing JavaScript to set a first-party cookie, but this seems to make things more complicated for the affiliate. I also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most ideal scenario so far, as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel, which would submit all of the lead information; that request would come from the visitor's machine, so I could access my third-party cookie.

    Read the article

  • Gotcha when using JavaScript in ADF Regions

    - by Frank Nimphius
    You use the ADF Faces af:resource tag to add or reference JavaScript on a page. However, adding the af:resource tag to a page fragment may not produce the desired result if the script is added as shown below:
    <?xml version='1.0' encoding='UTF-8'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
      <af:resource type="javascript">
        function yourMethod(evt){ ... }
      </af:resource>
    Adding scripts to a page fragment like this will see the script added for the first page fragment loaded by an ADF region, but not for any subsequent fragment navigated to within the context of task flow navigation. The cause of this problem is caching: the af:resource tag is a JSP element and not a lazily loaded JSF component, which makes it a candidate for caching. To solve the problem, move the af:resource tag into a container component like af:panelFormLayout, so the script is added when the component is instantiated and added to the page:
    <?xml version='1.0' encoding='UTF-8'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
      <af:panelFormLayout>
        <af:resource type="javascript">
          function yourMethod(evt){ ... }
        </af:resource>
      </af:panelFormLayout>
    Magically this then works and prevents browser caching of the script when using page fragments.

    Read the article

  • What methods should save/load a game state

    - by vedi
    There are a lot of articles about how to save the state of a game, and they are pretty good. But I have one conceptual misunderstanding: where should I save the state? My game has a number of screens, and a pair of them are MainMenuScreen and MainSceneScreen; these are inherited from the Screen class. MainMenuScreen is shown at the start of the game, the MainSceneScreen a little later. What is the problem? I navigated to MainSceneScreen, then forced Android to stop the application (I changed the language settings on the device to achieve this; please let me know if that's wrong). After that, I selected the application again and I can see the MainMenuScreen is shown, but I want the MainSceneScreen to be shown. I suppose I should override the resume method, but in which class? I have a class PsGame that extends the Game class of libgdx. I put breakpoints in its resume method, and it turned out that the method was not called. I investigated the problem and found slightly strange code in the onResume method of libgdx's AndroidApplication class: if (!firstResume) graphics.resume(); else firstResume = false; My debugger said firstResume was true, so it never reached the graphics.resume() line. Sorry for the wall of words, but could you answer the following questions: What did I do wrong? What methods should I override? Thank you in advance.

    Read the article

  • dell vostro 1000 broadcom wireless connection

    - by lorrenuy
    I have a problem with the Broadcom wifi hardware. I press the hotkey Fn+F2 to activate the hardware, but it does not work. I looked at the drivers, and they appear to be installed. How can I solve this problem? Ubuntu is all new to me, so if possible give me a clear explanation. For now I connect via the LAN cable. I am using Ubuntu 11.10.
    lawrence@lawrence-Vostro-1000:~$ sudo lshw -class network
    [sudo] password for lawrence:
    PCI (sysfs)
    *-network
    description: Network controller
    product: BCM4311 802.11b/g WLAN
    vendor: Broadcom Corporation
    physical id: 0
    bus info: pci@0000:05:00.0
    version: 01
    width: 32 bits
    clock: 33MHz
    capabilities: pm msi pciexpress bus_master cap_list
    configuration: driver=b43-pci-bridge latency=0
    resources: irq:18 memory:c0200000-c0203fff
    *-network
    description: Ethernet interface
    product: BCM4401-B0 100Base-TX
    vendor: Broadcom Corporation
    physical id: 0
    bus info: pci@0000:08:00.0
    logical name: eth1
    version: 02
    serial: 00:1c:23:a2:b9:a9
    size: 100Mbit/s
    capacity: 100Mbit/s
    width: 32 bits
    clock: 33MHz
    capabilities: pm bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
    configuration: autonegotiation=on broadcast=yes driver=b44 driverversion=2.0 duplex=full ip=192.168.1.18 latency=64 link=yes multicast=yes port=twisted pair speed=100Mbit/s
    resources: irq:21 memory:c0300000-c0301fff
    lawrence@lawrence-Vostro-1000:~$ rfkill list all
    0: dell-wifi: Wireless LAN
    Soft blocked: yes
    Hard blocked: yes

    Read the article

  • Alternate method to dependent, nested if statements to check multiple states

    - by octopusgrabbus
    Is there an easier way to process multiple true/false states than using nested if statements? I think there is: create a sequence of states and then use a function like when to determine if all the states were true, and drop out if not. I am asking the question to make sure there is not a preferred Clojure way to do this. Here is the background of my problem: I have an application that depends on quite a few input files. The application depends on .csv data reports; column headers for each report (also .csv files), so each sequence in the sequence of sequences can be zipped together with its columns for the purpose of creating a smaller sequence; and column files for output data. I use the following functions to find out whether a file is present:
    (defn kind [filename]
      (let [f (File. filename)]
        (cond (.isFile f) "file"
              (.isDirectory f) "directory"
              (.exists f) "other"
              :else "(cannot be found)")))

    (defn look-for [filename expected-type]
      (let [find-status (kind-stat filename expected-type)]
        find-status))
    And here are the first few lines of a multiple if which looks ugly and is hard to maintain:
    (defn extract-re-values
      "Plain old-fashioned sub-routine to process real-estate values / 3rd Q re bills extract."
      [opts]
      (if (= (utl/look-for (:ifm1 opts) "f") 0)        ; got re columns?
        (if (= (utl/look-for (:ifn1 opts) "f") 0)      ; got re data?
          (if (= (utl/look-for (:ifm3 opts) "f") 0)    ; got re values output columns?
            (if (= (utl/look-for (:ifm4 opts) "f") 0)  ; got re_mixed_use_ratio columns?
              (let [re-in-col-nams (first (utl/fetch-csv-data (:ifm1 opts)))
                    re-in-data (utl/fetch-csv-data (:ifn1 opts))
                    re-val-cols-out (first (utl/fetch-csv-data (:ifm3 opts)))
                    mu-val-cols-out (first (utl/fetch-csv-data (:ifm4 opts)))
                    chk-results (utl/chk-seq-len re-in-col-nams (first re-in-data) re-rec-count)]
    I am not looking for a discussion of the best way, but for what there is in Clojure that facilitates solving a problem like this.

    Read the article

  • ORA-600 Troubleshooting

    - by [email protected]
    Have you observed an ORA-600 or ORA-07445 reported in your alert log? The ORA-600 error is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The ORA-600 error statement includes a list of arguments in square brackets: ORA-600 "internal error code, arguments: [%s], [%s], [%s], [%s], [%s]". The first argument is the internal message number or character string. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. The remaining arguments in the ORA-600 error text are used to supply further information (e.g. values of internal variables). Looking for the best way to diagnose? There is an ORA-600 Troubleshooter Tool available in My Oracle Support. This tool will lead you to applicable content in My Oracle Support on the problem; it can be used to investigate the problem with argument data from the error message, or you can pull out the first 10 or 15 stack pointers from the associated trace file to match against known bugs.
    Note 153788.1 ORA-600/ORA-7445 Troubleshooter
    Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]
    Also, take a quick look at the Master Note for Diagnosing ORA-600 (MasterNoteORA600.docx) for some tips on diagnosing.

    Read the article

  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm currently implementing a picking system. I render some objects into a frame buffer whose render target has the D3DFMT_R32F format. For each mesh, I set an integer constant, which is its material index. My shader is simple: I output the position of each vertex, and for each pixel I cast the material index to float and assign this value to the red channel:
    int ObjectIndex;
    float4x4 WvpXf : WorldViewProjection< string UIWidget = "None"; >;
    struct VS_INPUT {
        float3 Position : POSITION;
    };
    struct VS_OUTPUT {
        float4 Position : POSITION;
    };
    struct PS_OUTPUT {
        float4 Color : COLOR0;
    };
    VS_OUTPUT VSMain( const VS_INPUT input )
    {
        VS_OUTPUT output = (VS_OUTPUT)0;
        output.Position = mul( float4(input.Position, 1), WvpXf );
        return output;
    }
    PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS )
    {
        PS_OUTPUT output = (PS_OUTPUT)0;
        output.Color.r = float( ObjectIndex );
        output.Color.gba = 0.0f;
        return output;
    }
    technique Default
    {
        pass P0
        {
            VertexShader = compile vs_3_0 VSMain();
            PixelShader = compile ps_3_0 PSMain();
        }
    }
    The problem I have is that somehow the values written to the render target are clamped between 0.0f and 1.0f. I've tried to change the render target format, but I always get clamped values... I don't know what the root of the problem is. For information: I have a depth render target attached to the frame buffer, blending is disabled in the render state, and the stencil is disabled. Any ideas?

    Read the article

  • Centrino Wireless-N 1000 takes forever to connect and keeps asking for password

    - by waclock
    A few days ago I started having this problem: when I try to connect to any WiFi network, it stays connecting forever, and after a minute or so it asks me for the password again. The strange thing is that this happened out of nowhere; I did not install any new drivers or anything like that. After this happened I decided to uninstall Ubuntu and install it again ("inside Windows"), but the problem is still there. Any suggestions would be greatly appreciated. Here is the relevant output (rfkill and lshw):
    0: hp-wifi: Wireless LAN
    Soft blocked: no
    Hard blocked: no
    1: hp-bluetooth: Bluetooth
    Soft blocked: yes
    Hard blocked: no
    2: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no
    description: Ethernet interface
    product: RTL8111/8168B PCI Express Gigabit Ethernet controller
    vendor: Realtek Semiconductor Co., Ltd.
    physical id: 0
    bus info: pci@0000:07:00.0
    logical name: eth0
    version: 06
    serial: 2c:27:d7:aa:e4:7d
    size: 10Mbit/s
    capacity: 1Gbit/s
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
    configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl8168e-3_0.0.4 03/27/12 latency=0 link=no multicast=yes port=MII speed=10Mbit/s
    resources: irq:50 ioport:4000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
    *-network
    description: Wireless interface
    product: Centrino Wireless-N 1000
    vendor: Intel Corporation
    physical id: 0
    bus info: pci@0000:0d:00.0
    logical name: wlan0
    version: 00
    serial: 00:1e:64:09:9c:58
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
    configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
    resources: irq:52 memory:c4500000-c4501fff
    *-network
    description: Ethernet interface
    physical id: 1
    bus info: usb@2:1.2
    logical name: eth1
    serial: ee:85:2f:7d:80:96
    capabilities: ethernet physical
    configuration: broadcast=yes driver=ipheth ip=172.20.10.2 link=yes multicast=yes

    Read the article

  • Booting Error while using 12.04 booting from GRUB

    - by Paul Z.
    My name is Paul. I have encountered an issue relating to GRUB and the booting process in general. I have been running Ubuntu 12.04 LTS on my machine for quite a while; before that, I had 10.04, 11.04, 11.10, etc. I have been running Ubuntu in general, and more specifically 12.04, for a long time with little to no problems. The problem: earlier today, I was using my machine and then decided to take a little break. I shut down my machine (a laptop, in case anyone was wondering) and left. Later, I came back ready to start it up and continue. I started it up and it took me to the Toshiba screen (like normal), then to the GRUB screen. I guessed that nothing was truly wrong, and chose the first option (something along the lines of: Ubuntu, with Linux 3.2.0-35-generic). I waited for a bit and it still displayed the same purple screen. I restarted it and this time chose the option like the first but with "recovery" at the end. Same result. Later, I waited longer and found that my computer came up with a bunch of lines of text. I waited longer, but nothing new happened. What are your suggestions for fixing this problem? I will let my computer run overnight with the recovery setting and will let you know the result. Until then, please help. Thank you; your time and effort are greatly appreciated!

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question The above question asked to solve a problem for an NxN matrix. While there was an easy solution, I gave a more general solution to solve the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex. One such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here: runtime complexity (how fast the code runs) and code complexity (how difficult the code is to read and understand). The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change; however, at the same time it can often result in complicating the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: where should you draw the line between generalising and keeping the code easy to understand?

    Read the article

  • How to visually "connect" skybox edges with terrain model

    - by David
    I'm working on a simple airplane game where I use a skybox cube rendered with the depth test disabled. Very close to the bottom side of the skybox is my terrain model. What bothers me is that the terrain is not connected to the skybox bottom. This is not visible while the plane flies low, but as it gains altitude the terrain looks smaller because of the perspective. Since the skybox center is always the same as the camera position, the skybox moves with the plane, but the terrain recedes into the distance. OK, I think you understand the problem. My question is how to fix it. It's an airplane game, so limiting the maximum altitude is not possible. I thought about somehow stretching the terrain to always cover the whole bottom side of the skybox cube, but that doesn't feel right and I don't even know how I would calculate the new terrain dimensions every frame. Here are some screenshots of games where you can clearly see the problem (oops, I cannot post images yet): darker brown is the skybox bottom here: http://i.stack.imgur.com/iMsAf.png untextured brown is the skybox bottom here: http://i.stack.imgur.com/9oZr7.png

    Read the article

  • (GUIDE) How to install and configure Mariadb on Ubuntu 12.10+

    - by Myh Yazid
    First of all, open a terminal and type this:
    sudo apt-get install python-software-properties
    I recommend you use MariaDB version 10.0.4 Alpha, because when I installed it I got no errors, compared with the 5.5 version.
    2: Put these commands in the terminal:
    sudo apt-get install software-properties-common
    sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
    sudo add-apt-repository 'deb http://download.nus.edu.sg/mirror/mariadb/repo/10.0/ubuntu quantal main'
    If you're using another version, please change "quantal" to your Ubuntu version codename, e.g.:
    13.10 saucy
    13.04 raring
    12.10 quantal (I am using this version)
    12.04 precise
    3: Type in these commands:
    sudo apt-get update
    sudo apt-get install mariadb-server
    4: After MariaDB has finished installing, you need to run this:
    sudo mysql_install_db
    sudo /usr/bin/mysql_secure_installation
    If you have problems, look at the end of this post for solutions.
    5: You're done!!
    Problem solving:
    In step 3, if you get a problem like unmet dependencies, go to /etc/apt/preferences.d, create a new file called "mariadb", and put the code below into the file you just created:
    Package: *
    Pin: origin <mirror-domain>
    Pin-Priority: 1000
    In step 4 you may get two errors. The first occurs when you run sudo mysql_install_db. Solution: open another terminal and do this:
    killall mysqld
    The second error may occur when you run the sudo /usr/bin/mysql_secure_installation command. Solution: try doing this:
    cd /etc/init.d
    and try running ./mysqld start; if ./mysqld doesn't exist, use ./mysql start, as in my case.
    That's all. Thank you for reading. I wrote this based on my experience installing it. Any errors you get, the community here can help you with if I can't. TQ.

    Read the article
