Search Results

Search found 3618 results on 145 pages for 'huge'.


  • Java Plugin a huge security risk? How to prevent Java plugin privilege escalation?

    - by Johannes Weiß
    Installing a regular Java plugin is IMHO a real security risk for non-IT people. Normally Java applets run in a sandbox, and the applet cannot do anything harmful to your computer. If an applet needs to do something like read-only access to your filesystem (e.g. uploading an image), however, you have to give it more privileges. Usually that's OK, but I think not everyone knows that you are giving the applet the same privileges on your computer as your user has! And that's everything Java asks you - a dialog that looks about as 'harmful' as a self-signed SSL certificate on a random page where no sensitive data is exchanged. The user will click Run! You can try it at home using JyConsole, which is Jython (Python on Java). Simply type in Python code, e.g.

        import os
        os.system('cat /etc/passwd')

    or worse (DON'T TYPE IN THAT CODE ON YOUR COMPUTER!!!):

        import os
        os.system('rm -rf ~')

    Does anyone know how you can disable the possibility of privilege escalation? And by the way, does anyone know why Sun displays only a dialog as harmless as the one shown above (the self-signed-SSL-certificate dialog from Firefox 3 and above is much clearer here)? Live sample from my computer:

    Read the article

  • How can I recover a huge folder that's been converted to a zero KB file on an NTFS partition?

    - by aalaap
    I have a 1TB drive with two 500GB partitions. One of them is being used as a Mac OS X Time Machine back up drive and the other one was NTFS and being used for storage. I had my entire 'iTunes Music' folder stored on it. Recently, there were some errors on the NTFS drive that caused chkdsk to run when in Windows, and it removed a lot of corrupt files. In this process, it converted my 'iTunes Music' folder into one zero KB file. How can I recover this? The partitions are intact and the other data on the disk is still accessible. It's just the 'iTunes Music' folder that's gone.

    Read the article

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on upgrading the hard drive space and RAM at a later date, depending on how I move forward.

    Server 1: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50GHz
    Server 2: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50GHz

    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, and Team Fortress. I know the general consensus is that virtualization is a horrible idea if you plan on running game servers in it. The issue is that the discussions I read never clearly state whether they mean a virtual machine running inside a host OS (i.e. VMware Player running on Windows with the game server in a VM) or a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running a VM inside an OS.

    Read the article

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:

    1. Download/burn the Ubuntu minimal CD (the 12MB one)
    2. Install Ubuntu minimal
    3. sudo apt-get update
    4. sudo apt-get upgrade
    5. sudo apt-get install ubuntu-desktop ubuntu-standard

    In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and DropBox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point of 9.10 being installed and working. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience... This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought that it could be related to the IPv6 issues people have been experiencing, but even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP, I have the same amount of packet loss. No DNS resolving should be required there. Access to my router works peachy, with no packet loss there. I've tried different MTU values, but to no avail. Note that this issue affects every web-enabled application: Firefox, ping, synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I ran "sudo apt-get install ubuntu-desktop ubuntu-standard" after 9.10 minimal was installed, it downloaded over 400MB of packages without a hitch, so my guess is that one of those packages, either in ubuntu-desktop or ubuntu-standard, is causing havoc. Thoughts?

    Read the article

  • How do I upload huge files across the internet without using P2P?

    - by Brien Malone
    I work remotely and have 44GB of media files that I need to send back to my office. There are lots of free services out there that can handle up to 2GB, but I haven't seen talk of anything larger. We both have 50 Mbps+ connections, so I would rather not mail physical media (though that is an option). BitTorrent is blocked at my corporate headquarters. We have an FTP server, but the per-user cap is 10GB. I use Citrix, but throughput is throttled to 3 Mbps. (44GB @ 50 Mbps = 4 to 5 hours... @ 3 Mbps = 5 or 6 days.) Any suggestions appreciated. Windows 7 and Windows Server 2003 are the OSes involved. I have tried JetBytes and it is blocked by our content filter.
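
    For reference, the raw line-rate arithmetic behind those estimates works out as follows (a quick sketch; real throughput is lower because of protocol overhead and shared links, which is presumably what the more pessimistic figures above allow for):

        size_gb = 44
        for mbps in (50, 3):
            # GB -> megabits -> seconds -> hours
            hours = size_gb * 8 * 1000 / mbps / 3600
            print("%d Mbps: %.1f hours" % (mbps, hours))
        # 50 Mbps: 2.0 hours;  3 Mbps: 32.6 hours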

    Read the article

  • How to efficiently dump a huge MySQL InnoDB database?

    - by Jagbir
    I have an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the root partition where the DB is stored is itself 300 GB. That essentially means around 96% of / is full, there's no space left for storing a dump/backup, and no other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines:

    1. Request that an extra drive be attached to the server and take a dump onto that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When the migration is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or any alternate, better approach for this task?
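
    A minimal sketch of a variation on step 1 that avoids needing an extra drive at all: stream the dump straight to the new server over SSH so nothing is written to the full root partition (Python for illustration; the host name and credentials are placeholders, and --master-data=2 records the binlog coordinates later needed to attach the new server as a slave):

        import subprocess

        # Stream a consistent InnoDB dump to the new server without touching local disk.
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", "--master-data=2",
             "--all-databases", "-uroot", "-pSECRET"],   # placeholder credentials
            stdout=subprocess.PIPE)
        load = subprocess.Popen(
            ["ssh", "new-db-host", "mysql -uroot -pSECRET"],  # placeholder host
            stdin=dump.stdout)
        dump.stdout.close()  # so mysqldump sees a broken pipe if ssh dies
        load.wait()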

    Read the article

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags are in a random order within their respective parent tag. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value in name="". Some parent tags only appear once and don't have a name="" parameter; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort.) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already - I just can't find it using Google.

    Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip

    Update: Follow-up question here.
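
    Although the poster prefers an off-the-shelf tool, for anyone comfortable with a few lines of Python, a minimal recursive sorter along these lines is short (an untested sketch; file names are placeholders). It orders siblings by tag name first and then by the name="" attribute when present, matching the fallback rule described above:

        import xml.etree.ElementTree as ET

        def sort_children(elem):
            for child in elem:
                sort_children(child)
            # sort by tag name, then by the name="" attribute when present
            elem[:] = sorted(elem, key=lambda e: (e.tag, e.get("name", "")))

        tree = ET.parse("config.xml")
        sort_children(tree.getroot())
        tree.write("config.sorted.xml", encoding="utf-8")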

    Read the article

  • How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

    - by the.duckman
    I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed, and run the same Java web servers. No other load. One of them is 20-30% faster than the other, very consistently. I used dstat to figure out whether there are more context switches, IO, swapping or anything, but I see no reason for the difference. With the same workload (no swapping, virtually no IO), the CPU usage and load are higher on one server. So the difference appears to be mainly CPU bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance. I tried to figure out if the BIOS settings differ in some parameter and did a dump using dmidecode, but that yielded no difference. I compared /proc/cpuinfo, no difference. I compared the output of cpufreq-info, no difference. I am lost. What can I do to figure out what is going on?
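
    To test the memory-performance theory, a crude cross-machine probe can help before reaching for a real memory benchmark such as STREAM. This sketch just measures large-copy bandwidth on each box (assuming the same Python interpreter version on both, so the numbers are comparable):

        import time

        N = 200 * 1024 * 1024          # 200 MB buffer
        buf = bytearray(N)
        t0 = time.perf_counter()
        for _ in range(5):
            _ = bytes(buf)             # forces a full copy through memory
        dt = time.perf_counter() - t0
        print("approx copy bandwidth: %.1f MB/s" % (5 * N / dt / 1e6))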

    Read the article

  • How to find out what is causing huge dentry_cache usage?

    - by Piavlo
    Note that the inode_cache & ext3_inode_cache slabs are very small compared to dentry_cache. What happens is that, slowly and steadily within a week, the dentry_cache grows from 1M to ~5-6G. Then I need to run:

        echo 2 > /proc/sys/vm/drop_caches && echo 0 > /proc/sys/vm/drop_caches

    This started happening one day on all servers hosting some web code - the developers are saying that they have not changed anything related to the filesystem access pattern around the time the problem started. The system is CentOS 5 with a 2.6.18 kernel, so I don't have any of the instrumentation features available in newer kernels. Any idea how I can debug the problem? Maybe with SystemTap? This is an EC2 instance, so I'm not even sure that SystemTap will work there. Thanks, Alex
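
    Lacking newer-kernel instrumentation, one low-tech starting point is to sample /proc/slabinfo periodically and correlate dentry growth with application activity. A rough sketch (Python 3; assumes the 2.6-era slabinfo column order of name, active_objs, num_objs, objsize, ...):

        import time

        def dentry_bytes():
            with open("/proc/slabinfo") as f:
                for line in f:
                    fields = line.split()
                    if fields and fields[0] == "dentry":
                        # num_objs * objsize = approximate memory held by the slab
                        return int(fields[2]) * int(fields[3])
            return 0

        while True:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), dentry_bytes())
            time.sleep(60)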

    Read the article

  • Should tests be in the same Ruby file or in separate Ruby files?

    - by Junior Mayhé
    While using Selenium and Ruby to do some functional tests, I am worried about performance. So is it better to add all test methods to the same Ruby file, or should I put each one in a separate code file? Below is a sample with all tests in the same file:

        # encoding: utf-8
        require "selenium-webdriver"
        require "test/unit"

        class Tests < Test::Unit::TestCase
          def setup
            @driver = Selenium::WebDriver.for :firefox
            @base_url = "http://mysite"
            @driver.manage.timeouts.implicit_wait = 30
            @verification_errors = []
            @wait = Selenium::WebDriver::Wait.new :timeout => 10
          end

          def teardown
            @driver.quit
            assert_equal [], @verification_errors
          end

          def element_present?(how, what)
            @driver.find_element(how, what)
            true
          rescue Selenium::WebDriver::Error::NoSuchElementError
            false
          end

          def verify(&blk)
            yield
          rescue Test::Unit::AssertionFailedError => ex
            @verification_errors << ex
          end

          def test_1
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_2
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_3
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_4
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_5
            @driver.get(@base_url + "/")
            # a huge test here
          end
        end

    Read the article

  • Silverlight 4. Activator.CreateInstance uses a huge amount of memory

    - by Marco
    Hi, I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process approaches creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute, and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50MB to 400MB and sometimes to 1.5GB. If the process doesn't take that long, the layout is rendered properly and the memory falls back to 50MB. Otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue? Thanks in advance, Marco

    Read the article

  • Any way to avoid creating a huge C# COM interface wrapper when only a few methods are needed?

    - by Paul Accisano
    Greetings all, I’m working on a C# program that requires being able to get the index of the hot item in Windows 7 Explorer’s new ItemsView control. Fortunately, Microsoft has provided a way to do this through UI Automation, by querying custom properties of the control. Unfortunately, the System.Windows.Automation namespace inexplicably does not seem to provide a way to query custom properties! This leaves me with the undesirable position of having to completely ditch the C# Automation namespace and use only the unmanaged COM version. One way to do it would be to put all the Automation code in a separate C++/CLI module and call it from my C# application. However, I would like to avoid this option if possible, as it adds more files to my project, and I’d have to worry about 32/64-bit problems and such. The other option is to make use of the ComImport attribute to declare the relevant interfaces and do everything through COM-interop. This is what I would like to do. However, the relevant interfaces, such as IUIAutomation and IUIAutomationElement, are FREAKING HUGE. They have hundreds of methods in total, and reference tons and tons of interfaces (which I assume I would have to also declare), almost all of which I will never ever use. I don’t think the UI Automation interfaces are declared in any Type Library either, so I can’t use TLBIMP. Is there any way I can avoid having to manually translate a bajillion method signatures into C# and instead only declare the ten or so methods I actually need? I see that C# 4.0 added a new “dynamic” type that is supposed to ease COM interop; is that at all relevant to my problem? Thanks

    Read the article

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program; it is the only program accessing the file at the time. Also, the files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?
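
    Not Delphi, but one technique commonly suggested for this access pattern is memory mapping, shown here as a Python sketch (the path and numbers are placeholders; the Win32 equivalents would be CreateFileMapping/MapViewOfFile). The OS pages in only the regions actually touched, which often degrades less on multi-gigabyte files than explicit seek-and-read:

        import mmap

        def read_chunk(path, offset, length):
            with open(path, "rb") as f:
                # map the whole file; a tens-of-GB mapping needs a 64-bit process
                mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
                try:
                    return mm[offset:offset + length]
                finally:
                    mm.close()

        chunk = read_chunk("huge.bin", 10 * 1024 * 1024, 100 * 1024)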

    Read the article

  • Need help debugging a huge chunk of JSON data...

    - by meder
    I have a huge chunk, so large that I can't manually edit the file and need to read it in and do regex operations to see what's wrong. Basically, my server runs PHP 5.1.6 and I can't update it. This ships an older json_decode with fewer features than the 5.2/5.3 versions. json_decode returns NULL, and json_last_error would be invoked to find out why, but that function doesn't exist except in PHP 5.3, so I'm manually trying to see what's wrong:

        $regex = '#[^0-9"$a-zA-Z{:}().]#';
        $json  = preg_replace( $regex, '', $json );
        $tree  = json_decode( $json, true );
        var_dump($tree); // NULL

    A snippet of the JSON, somewhere in the middle:

        {"109":0,"103":1,"102":59,"101":70,"100":4299,"94":0,"50":51,"46":0,"45":0,"44":0,"43":0,"42":0,"23":0,"22":0,"18":0,"17":1,"16":1,"13":160,"8":4298}},"2":{"d":{"109":0,"103":92,"102":54,"101":53,"100":4301,"94":0,"50":4278,"49":328,"46":1,"45":0,"44":1,"43":0,"42":0,"26":0,"23":0,"22":0,"18":0,"17":1,"16":1,"8":4300},"m":{"94":1,"100":1,"26":1,"50":1,"8":1,"49":1,"18":1,"43":1,"42":1,"109":1},"c":{"\/":{"d":{"109":0,"100":4301,"94":0,"50":4278,"49":328,"43":0,"42":0,"26":0,"18":0,"8":4300}},"G":{"d":{"109":1,"100":4303,"94":1,"68":17,"50":64,"49":53,"43":1,"42":1,"34":0,"18":1,"13":2216,"11":0,"8":4302}}}},"3":

    The }}}} is suspicious, but that probably just closes 4 nested object literals. Would appreciate any insight.
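
    One way to find the broken spot without touching the PHP 5.1 server is to run the file through a stricter parser that reports the failure offset; here is a small Python 3 sketch (the file name is a placeholder). Note also that the cleanup regex above is itself destructive: its character class strips commas, square brackets and backslashes, which will mangle even valid JSON.

        import json

        def locate_json_error(text):
            try:
                json.loads(text)
                print("JSON parsed OK")
            except json.JSONDecodeError as e:
                # e.pos is the character offset of the first syntax error
                start = max(e.pos - 40, 0)
                print("%s at offset %d" % (e.msg, e.pos))
                print("context: ...%s..." % text[start:e.pos + 40])

        with open("data.json") as f:
            locate_json_error(f.read())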

    Read the article

  • Refactoring huge if ( ... instanceof ...)

    - by Chris
    I'm currently trying to refactor a part of a project that looks like this - many classes:

        B extends A;
        C extends A;
        D extends C;
        E extends B;
        F extends A;
        ...

    And somewhere in the code:

        if (x instanceof B){
            B n = (B) x;
            ...
        }else if (x instanceof C){
            C n = (C) x;
            ...
        }else if (x instanceof D){
            D n = (D) x;
            ...
        }else if (x instanceof E){
            E n = (E) x;
            ...
        }else if (x instanceof G){
            G n = (G) x;
            ...
        }...

    The above if-construct currently sits in a function with a CC of 19. Now my question is: can I split this if-construct up into multiple functions and let Java's OO do the magic? Or are there any catches I have to look out for? My idea:

        private void oopMagic(C obj){ ... Do the stuff from the if (x instanceof C) here}
        private void oopMagic(D obj){ ... Do the stuff from the if (x instanceof D) here}
        private void oopMagic(E obj){ ... Do the stuff from the if (x instanceof E) here}
        ....

    and instead of the huge if:

        oopMagic(x);

    Read the article

  • SSMS Tools Pack 2.0 is out! With huge productivity booster features that will blow your mind and ease your job even more.

    - by Mladen Prajdic
    What better way to end the summer and start those productive autumn days ahead than with a fresh new version of the SSMS Tools Pack. This is a big release with two new features that are huge productivity boosters.

    The first new feature is Tab Sessions. Every SQL tab you open is saved every N (default 2) minutes and is stored in a session. This works similarly to internet browser sessions. Once you reopen SSMS, you can restore your last session with a click of a button. You even get every window connected to the server it was previously connected to. The Tab History window looks like this:

    The second feature is the Execution Plan Analyzer. It is designed to quickly help you find the costliest operators by a number of properties. If that's not enough, you can easily search through the whole execution plan for whatever you like. And to top it off, you can auto-analyze the execution plan. The analysis reports various problems the execution plan has and suggests the most common solution. The ultimate purpose of the Execution Plan Analyzer is to make your troubleshooting quicker and easier. It uses a simple user interface that is easy to navigate and is built directly into the execution plan itself. The Execution Plan Analyzer looks like this:

    Smaller fixes include a completely redesigned SQL History Search window and various other bug fixes. You can download the new version 2.0 at the Download page. For more detailed feature descriptions go to the main Features page. Enjoy it!

    Read the article

  • Why does use of H264 in sender/receiver pipelines introduce such a HUGE delay?

    - by Serguey Zefirov
    When I try to create a pipeline that uses H264 to transmit video, I get an enormous delay, up to 10 seconds, to transmit video from my machine to... my machine! This is unacceptable for my goals and I'd like to consult StackOverflow about what I (or someone else) am doing wrong. I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex. This is the sender pipeline:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    Receiver pipeline:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    Those pipelines, a combination of H263 and Speex, work well enough. I snap my fingers near the camera and microphone and then I see the movement and hear the sound at the same time. Then I changed the pipelines to use H264 along the video path. The sender becomes:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    And the receiver becomes:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04 - the delays there were in the range of 2-3 seconds, AFAIR.

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose JavaScript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client-side storage, the application has to check whether the database is initialized before starting up the ORM, because the ORM inspects the DB tables to construct the classes. My problem is that this sequence of operations (and, if this one is like this, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer, and it results in some complex call graphs and whatnot. So, I ask these questions:

    1. How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function?
    2. Do you have any general strategies or links for async/event-driven programming and keeping the call graph simple and understandable?
    3. Do you have any suggestions for JavaScript ORMs/meta object systems that work with HTML 5 as a storage engine and are hopefully framework agnostic?
    4. Am I just a big newb and should be able to work this out with ease?

    Thanks folks!

    Read the article

  • Why does my DHTML menu have huge layout issues in IE7?

    - by webfac
    This one's got me stumped. Usually with a little CSS juggling here and there I am able to solve most IE7 CSS bugs, but not this one! Head on over to the example page and view it in IE7, you will soon see that when mousing over the (vertical) drop down menu on the left of the page, it opens the sub menu WAY over to the right. I have pulled every last hair out! If it helps, the menu was made with 'Sothink DHTML Menu 9' As always, all replies are much appreciated.

    Read the article

  • Help me to simplify my jQuery, it's growing huge and redundant!

    - by liquilife
    Hey all, I am no jQuery expert, but I'm learning. I'm using a bit (growing to a LOT) of jQuery to hide some images and show a single image when a thumb is clicked. While this bit of jQuery works, it's horribly inefficient, and I am unsure of how to simplify it into something that works on more of a universal level.

        <script>
        $(document).ready(function () {
            // Changing the Materials
            $("a#shirtred").click(function () {
                $("#selectMaterials img").removeClass("visible");
                $("img.selectShirtRed").addClass("visible");
            });
            $("a#shirtgrey").click(function () {
                $("#selectMaterials img").removeClass("visible");
                $("img.selectShirtGrey").addClass("visible");
            });
            $("a#shirtgreen").click(function () {
                $("#selectMaterials img").removeClass("visible");
                $("img.selectShirtGreen").addClass("visible");
            });
            $("a#shirtblue").click(function () {
                $("#selectMaterials img").removeClass("visible");
                $("img.selectShirtBlue").addClass("visible");
            });
            // Changing the Collars
            $("a#collarred").click(function () {
                $("#selectCollar img").removeClass("visible");
                $("img.selectCollarRed").addClass("visible");
            });
            $("a#collargrey").click(function () {
                $("#selectCollar img").removeClass("visible");
                $("img.selectCollarGrey").addClass("visible");
            });
            $("a#collargreen").click(function () {
                $("#selectCollar img").removeClass("visible");
                $("img.selectCollarGreen").addClass("visible");
            });
            $("a#collarblue").click(function () {
                $("#selectCollar img").removeClass("visible");
                $("img.selectCollarBlue").addClass("visible");
            });
            // Changing the Cuffs
            $("a#cuffred").click(function () {
                $("#selectCuff img").removeClass("visible");
                $("img.selectCuffRed").addClass("visible");
            });
            $("a#cuffgrey").click(function () {
                $("#selectCuff img").removeClass("visible");
                $("img.selectCuffGrey").addClass("visible");
            });
            $("a#cuffblue").click(function () {
                $("#selectCuff img").removeClass("visible");
                $("img.selectCuffBlue").addClass("visible");
            });
            $("a#cuffgreen").click(function () {
                $("#selectCuff img").removeClass("visible");
                $("img.selectCuffGreen").addClass("visible");
            });
            // Changing the Pockets
            $("a#pocketred").click(function () {
                $("#selectPocket img").removeClass("visible");
                $("img.selectPocketRed").addClass("visible");
            });
            $("a#pocketgrey").click(function () {
                $("#selectPocket img").removeClass("visible");
                $("img.selectPocketGrey").addClass("visible");
            });
            $("a#pocketblue").click(function () {
                $("#selectPocket img").removeClass("visible");
                $("img.selectPocketBlue").addClass("visible");
            });
            $("a#pocketgreen").click(function () {
                $("#selectPocket img").removeClass("visible");
                $("img.selectPocketGreen").addClass("visible");
            });
        });
        </script>

        <!-- Thumbnails which can be clicked on to toggle the larger preview image -->
        <div class="materials">
            <a href="javascript:;" id="shirtgrey"><img src="/grey_shirt.png" height="122" width="122" /></a>
            <a href="javascript:;" id="shirtred"><img src="red_shirt.png" height="122" width="122" /></a>
            <a href="javascript:;" id="shirtblue"><img src="hblue_shirt.png" height="122" width="122" /></a>
            <a href="javascript:;" id="shirtgreen"><img src="green_shirt.png" height="122" width="122" /></a>
        </div>
        <div class="collars">
            <a href="javascript:;" id="collargrey"><img src="grey_collar.png" height="122" width="122" /></a>
            <a href="javascript:;" id="collarred"><img src="red_collar.png" height="122" width="122" /></a>
            <a href="javascript:;" id="collarblue"><img src="blue_collar.png" height="122" width="122" /></a>
            <a href="javascript:;" id="collargreen"><img src="green_collar.png" height="122" width="122" /></a>
        </div>
        <div class="cuffs">
            <a href="javascript:;" id="cuffgrey"><img src="grey_cuff.png" height="122" width="122" /></a>
            <a href="javascript:;" id="cuffred"><img src="red_cuff.png" height="122" width="122" /></a>
            <a href="javascript:;" id="cuffblue"><img src="blue_cuff.png" height="122" width="122" /></a>
            <a href="javascript:;" id="cuffgreen"><img src="/green_cuff.png" height="122" width="122" /></a>
        </div>
        <div class="pockets">
            <a href="javascript:;" id="pocketgrey"><img src="grey_pocket.png" height="122" width="122" /></a>
            <a href="javascript:;" id="pocketred"><img src=".png" height="122" width="122" /></a>
            <a href="javascript:;" id="pocketblue"><img src="blue_pocket.png" height="122" width="122" /></a>
            <a href="javascript:;" id="pocketgreen"><img src="green_pocket.png" height="122" width="122" /></a>
        </div>

        <!-- The larger images where one from each set should be viewable at one time, triggered by the thumb clicked above -->
        <div class="selectionimg">
            <div id="selectShirt">
                <img src="grey_shirt.png" height="250" width="250" class="selectShirtGrey show" />
                <img src="red_shirt.png" height="250" width="250" class="selectShirtRed hide" />
                <img src="blue_shirt.png" height="250" width="250" class="selectShirtBlue hide" />
                <img src="green_shirt.png" height="250" width="250" class="selectShirtGreen hide" />
            </div>
            <div id="selectCollar">
                <img src="grey_collar.png" height="250" width="250" class="selectCollarGrey show" />
                <img src="red_collar.png" height="250" width="250" class="selectCollarRed hide" />
                <img src="blue_collar.png" height="250" width="250" class="selectCollarBlue hide" />
                <img src="green_collar.png" height="250" width="250" class="selectCollarGreen hide" />
            </div>
            <div id="selectCuff">
                <img src="grey_cuff.png" height="250" width="250" class="selectCuffGrey show" />
                <img src="red_cuff.png" height="250" width="250" class="selectCuffRed hide" />
                <img src="blue_cuff.png" height="250" width="250" class="selectCuffBlue hide" />
                <img src="green_cuff.png" height="250" width="250" class="selectCuffGreen hide" />
            </div>
            <div id="selectPocket">
                <img src="grey_pocket.png" height="250" width="250" class="selectPocketGrey show" />
                <img src="hred_pocket.png" height="250" width="250" class="selectPocketRed hide" />
                <img src="blue_pocket.png" height="250" width="250" class="selectPocketBlue hide" />
                <img src="green_pocket.png" height="250" width="250" class="selectPocketGreen hide" />
            </div>
        </div>

    Read the article

  • How to apply Data Mining (Association Rules) to a huge database?

    - by stckvrflw
    Hello. What I want to do is apply the association method of data mining to my SQL Server 2000 database. An association rule is something like "finding the most frequent items that appear together in a database." For those who don't know, or who want a refresher on what the association method is like, take a look at this presentation about association rules in data mining: www.authorstream.com/Presentation/a.besimi-233030-data-mining-intro-seeu-education-ppt-powerpoint - the 17th slide gives a nice example of applying an association rule to a database. So can you help me with how I should write my SQL code (if SQL alone will be sufficient, of course)? Thanks.
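
    Not T-SQL, but to make the counting concrete, here is a minimal sketch of the support counting that underlies association rules (Python; the transactions are hard-coded here, whereas in practice each set would be one basket/order fetched from SQL Server):

        from collections import Counter
        from itertools import combinations

        transactions = [
            {"bread", "milk"},
            {"bread", "diapers", "beer"},
            {"milk", "diapers", "beer"},
            {"bread", "milk", "diapers", "beer"},
        ]

        # Count how often each pair of items appears together in a transaction.
        pair_counts = Counter()
        for basket in transactions:
            for pair in combinations(sorted(basket), 2):
                pair_counts[pair] += 1

        min_support = 2  # keep pairs occurring in at least two transactions
        frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
        print(frequent)  # e.g. ('beer', 'diapers') appears 3 times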

    Read the article
