Daily Archives

Articles indexed Wednesday, December 22, 2010

Page 2 of 32

  • testng multiple suites

    - by Eli
    Hi people. My problem is as follows: I am testing a web UI using Selenium and TestNG. I have a test suite with many test classes in it, and a method annotated with @BeforeSuite which also has a @Parameters annotation. This method receives as a parameter the browser in which Selenium will run the test, executing the lines:

        selenium = new DefaultSelenium("localhost", 4444, browser, "http://localhost:8099");
        selenium.start();

    The XML I'm using to run the test suite is:

        <suite name="suite">
            <parameter name="browser" value="*firefox"/>
            <test name="allTests">
                <classes>
                    <class name="test.webui.MemcachedDeploymentTest" />
                </classes>
            </test>
        </suite>

    This works fine and the test runs in Firefox. My problem is that I would like to somehow run this suite again, immediately after the first run finishes, but this time with Chrome as the browser. I now have two XML suites, one with Chrome and one with Firefox. Is there any way to run these test suites one after the other automatically? Maybe using a third XML? Thanks in advance
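
    A third XML can indeed chain the two existing ones: TestNG lets a master suite file list other suite files, which are then run in order. A minimal sketch, assuming the two existing files are named firefox-suite.xml and chrome-suite.xml:

        <suite name="master">
            <suite-files>
                <suite-file path="firefox-suite.xml" />
                <suite-file path="chrome-suite.xml" />
            </suite-files>
        </suite>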

    Read the article

  • zoomfactor value in CGAffineTransformMakeScale in iPhone

    - by suse
    Hello,

    1) I'm doing pinch zoom on a UIImageView. How should I decide on the zoom factor value? When the zoom factor goes below 0 (i.e. a negative value), the image gets tilted, which I don't want to happen. How do I avoid this situation?

    2) Why is a flickering kind of rotation happening rather than a smooth rotation? Will this be taken care of by the CGAffineTransformMakeScale(zoomFactor, zoomFactor) method?

    This is what I'm doing in my code:

        zoomFactor = 0; // Initially zoomFactor is set to zero

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@" Inside touchesBegan ..................");
            NSArray *twoTouches = [touches allObjects];
            UITouch *first = [twoTouches objectAtIndex:0];
            OPERATION = [self identifyOperation:touches :first];
            NSLog(@"OPERATION : %d", OPERATION);
            if (OPERATION == OPERATION_PINCH) { // double touch pinch
                UITouch *second = [twoTouches objectAtIndex:1];
                f_G_initialDistance = distanceBetweenPoints([first locationInView:self.view], [second locationInView:self.view]);
            }
            NSLog(@" leaving touchesBegan ..................");
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@" Inside touchesMoved .................");
            NSArray *twoTouchPoints = [touches allObjects];
            if (OPERATION == OPERATION_PINCH) {
                CGFloat currentDistance = distanceBetweenPoints([[twoTouchPoints objectAtIndex:0] locationInView:self.view], [[twoTouchPoints objectAtIndex:1] locationInView:self.view]);
                int pinchOperation = [self identifyPinchOperation:f_G_initialDistance :currentDistance];
                G_zoomFactor = [self calculateZoomFactor:pinchOperation :G_zoomFactor];
                [uiImageView_G_obj setTransform:CGAffineTransformMakeScale(G_zoomFactor, G_zoomFactor)];
                [self.view bringSubviewToFront:resetButton];
                [self.view bringSubviewToFront:uiSlider_G_obj];
                f_G_initialDistance = currentDistance;
            }
            NSLog(@" leaving touchesMoved ..................");
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@" Inside touchesEnded ..................");
            NSArray *twoTouches = [touches allObjects];
            UITouch *first = [twoTouches objectAtIndex:0];
            if (OPERATION == OPERATION_PINCH) {
                // do nothing
            }
            NSLog(@" Leaving touchesEnded ..................");
        }

    Thank You.
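
    One way to keep the scale from going negative is to clamp the zoom factor before applying the transform. A minimal sketch, with 0.1 and 4.0 as assumed bounds:

        // A negative scale mirrors the image, which is likely the "tilting"
        // described above; clamping keeps the factor in a sane positive range.
        CGFloat clamped = MAX(0.1f, MIN(G_zoomFactor, 4.0f));
        [uiImageView_G_obj setTransform:CGAffineTransformMakeScale(clamped, clamped)];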

    Read the article

  • joomla article list with thumbnails

    - by user541918
    My article list currently has this format:

        #  Title of the article    Author         Hits
        1  Restaurante Al Cambio   Administrator  24
        2  Convencion Verano 2010  Administrator  50
        3  Ile Aiye & Ketubara     Administrator  54

    I want a small thumbnail next to each article instead of the numbers 1, 2, 3, .... If anyone knows of a component/plugin/module available in Joomla to show an article list with thumbnails instead of numbers, please let me know. Thank You.

    Read the article

  • And the quality behind it? [E a qualidade por trás?]

    - by anobre
    [Translated from Portuguese] Hello everyone! Today the subject is not code, but its quality. Recently here at NBR we started a maintenance and migration contract with a client covering two existing projects. Our surprise came when we got access to the projects' source code. And that brings me to the subject of this post: how important is source code quality in a project? The big question in this specific case is the following: the layout is acceptable and planned, and we could see a certain care in it. But what about the code behind it? Between GoTo statements, Access, MySQL and SQL Server databases in the same project (with no need for it), a 100% procedural approach, no code reuse and no dynamic environments, this post is more of a vent and a worry than anything else. We as born developers must have one basic concern: am I doing my job properly, or am I just getting it off my plate? Many clients do not analyze the code behind their projects. As long as the interface delivers what was promised (or almost delivers it), everything is fine. And what is the price of badly written code? Maintenance is as important as the development of a new project. The key point is to argue this to prospective clients, and to prove to existing clients that it has value. In our day-to-day work we try to show clients (when they are interested) that our code is well made. And this does not depend on the project, the client or the developer: a well-crafted interface is as important as its code. Either one can ruin your project. But I confess that the hardest part of all this is defending the price of quality, and its importance, to those clients who think it is unnecessary. How do you defend this point of view? Let's be clear: well-made software is not cheap! And there is definitely no such option as "without quality". Cheers!

    Read the article

  • Can I? - Develop C/C++ in Fedora 14 using GCC 4.4 and deploy in CentOS 5.2 using GCC 4.4

    - by Amit Phatarphekar
    Hello - My production environment runs CentOS 5.2 and 5.5. I have to develop a new tool using C/C++ and deploy it on this production environment. I was planning to use Fedora 14 on my desktop with GCC 4.4 to do the development with the Eclipse IDE, and then later deploy the executables to CentOS 5.2 or 5.5. The production environment will have GCC 4.4 as well. Since both Fedora and CentOS are RHEL based, I thought this should be possible. So can I do this? Or do I need to have CentOS 5.2/5.5 on my development desktop as well? Thanks Amit
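
    A note on the usual catch here: the compiler matters less than the C library. A binary built against Fedora 14's newer glibc can require symbol versions that CentOS 5.2's glibc does not provide. A quick check, as a sketch (binary name assumed):

        # List the glibc symbol versions the binary requires; each one printed
        # must exist in the target CentOS system's glibc.
        objdump -T mytool | grep -o 'GLIBC_[0-9.]*' | sort -u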

    Read the article

  • Meaning of tcp_delack_min

    - by Phi
    Hi, the current Linux kernel (e.g. 2.6.36) uses delayed acknowledgments (delack). In include/net/tcp.h it says:

        #define TCP_DELACK_MIN ((unsigned)(HZ/25))

    So, for a kernel using an HZ value of 1000, an ACK should be delayed by a minimum of 40 ms. However, RFC 2581 says a TCP implementation SHOULD acknowledge every second full-sized segment without further delay. Does anybody know whether the Linux kernel follows that 'should', or whether the TCP_DELACK_MIN value means that even after a full-sized segment was received, the ACK continues to be delayed until 40 ms have passed?
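
    For a rough empirical check, Linux keeps a delayed-ACK counter that netstat can report. A sketch:

        # Compare this counter before and after a bulk transfer to see
        # how often ACKs were actually delayed.
        netstat -s | grep -i 'delayed acks'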

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.) The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Clustered Shared Volume(s) (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do. I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover. Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in their examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take. I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices. I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or am overlooking/missing something.

    Read the article

  • linux static ip problem

    - by out_sider
    I have a server running CentOS and I'd like to have the following setup: the server needs to have a static IP and be accessible by name. What I want is to be able to connect the server to the network and access it by name in Firefox, independently of the network, since it will be serving a web page. I tried setting the IP manually, but while the inet addr is set the way I want it, the Bcast isn't. I can only ping the server on the Bcast IP, and because that is 192.168.2.255, ssh says it's not valid.
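
    On CentOS, the usual place to pin both the address and the broadcast is the interface config file; a minimal sketch, with all the addresses assumed:

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=static
        IPADDR=192.168.2.10
        NETMASK=255.255.255.0
        BROADCAST=192.168.2.255
        ONBOOT=yes

    followed by "service network restart" to apply it.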

    Read the article

  • Upgrade a non-RAID server to RAID

    - by AZee
    I have just learned that our PDC has a single drive with two partitions. I also know that this drive has bad blocks, as recorded in the event log. What I would like to do is convert this to a RAID solution with a nice balance between economy and performance. I will admit that I have only configured servers with RAID from scratch, and have no experience upgrading an existing system into a RAID system. In fact, I'm not sure it is even possible. Since this is the PDC for 350+ workstations, downtime is important. I'd like to hear from other System Administrators how they would tackle this and their recommendations for all devices. At this time it seems to me that I can either replace the existing drive and restore from backup, or install a controller and drives, configure the RAID, and basically start from scratch. Thank you for taking your time. ~AZee

    Read the article

  • unzip and maintain directory structure of archives

    - by Ramy
    On Fedora 13, I tried using:

        unzip -j [nameof.zip]

    but this doesn't seem to maintain the folder structure of the original archive. I REALLY need to maintain this structure because the archive is a backup of all my m4a's, which are being converted to mp3. If I just convert it as is, then I'll just have a single massive directory full of mp3's, but they won't be in their respective "artist" folders.
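
    For what it's worth, -j is the switch that junks (discards) the stored paths; unzip's default behaviour is to recreate the directory structure. A sketch, with the archive name assumed:

        # Extract into ./music, preserving the archive's folder structure
        unzip backup.zip -d music/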

    Read the article

  • Setting differing ACLs on directories and files

    - by durandal
    Quick ACL question: I want to set up default permissions for a file share so that everyone can rwx all of the directories and so that all newly created files are rw. Everyone who is accessing this share is in the same group, so this isn't a concern. I have looked at doing this via ACLs without changing all of the users' umasks and such. Here are my current invocations:

        setfacl -Rdm g:mygroup:rwx share_name
        setfacl -Rm g:mygroup:rwx share_name

    My problem is that while I want all of the newly created sub-directories to be rwx, I only want newly created files to be rw. Does anyone have a better method to achieve my desired end-result? Is there some way to set ACLs on directories separately from files, in a similar vein to "chmod +x" vs. "chmod +X"? Thanks
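
    setfacl does in fact understand a capital X with the same semantics as "chmod +X": execute is granted only to directories (or to files that already have execute for some user). A sketch of the same pair of invocations using it:

        # Directories become rwx, plain files rw-
        setfacl -Rdm g:mygroup:rwX share_name
        setfacl -Rm g:mygroup:rwX share_name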

    Read the article

  • Doing "text mode 'splash' game" during boot.

    - by Vi
    Sometimes I want to do something (for example, play a simple text-mode game) while the system is booting up. This is especially useful when lengthy reiserfs transaction replays are happening. My current hacky way of doing it is:

    1. Put the program in the initramfs.
    2. Before running /sbin/init, "openvt 2 /my/program".
    3. Turn off messages from the kernel (sysrq 0).
    4. Override /dev/console with /dev/null (to prevent boot messages).

    The problems are:

    - There are STILL some messages interfering with the program's output.
    - I can't see boot messages by switching back to that virtual terminal.
    - After finishing the boot sequence, /dev/tty2 ends up being attached both to getty and to my program.

    How do I do this properly, short of running graphical splashes? The system is Linux Debian Squeeze, with no dependency-based sysv scripts.
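
    For reference, the hack above condenses to a few lines near the end of the initramfs init script; a sketch only, since openvt flag syntax varies between versions and /my/program is the asker's path:

        openvt -c 2 -- /my/program &   # run the game on VT2
        dmesg -n 1                     # console loglevel 1: emergencies only
        exec /sbin/init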

    Read the article

  • Windows XP: Make Google Chrome's minimize, restore and close buttons match other programs?

    - by TRiG
    I like the way Google Chrome puts the tabs above the address bar, but I don't like the way the minimize, restore, close buttons are a different shape to every other program's. It means that if I sit the mouse in the top corner and minimize everything, I find that I've restored Chrome, not minimized it. Is there any way to get these buttons to a normal shape and size? That's Firefox in front, looking normal, like every other program, and Chrome above and behind, with the buttons at an off-standard position and size.

    Read the article

  • Video card not detected in POST on initial boot.

    - by Jeff M
    I have a minor problem with my desktop computer after cleaning out the dust. When I first boot up the computer, the video card does not get detected, so I can't see anything. In POST, I'm getting the "can't detect video card" beeps. The boot sequence continues normally, just without video. However, if I restart it (using the restart button) anytime after POST, it boots up normally. I have no reason to think that the motherboard, video card or PSU got damaged in the process. It was working fine before, and it works fine after resetting; I took all the necessary precautions while cleaning. On the initial boot, I can hear the video card's fan power up but immediately power down, and it tries again one more time only to fail. After the beep, resetting gets everything running and sounding normally. I've reseated the card a couple of times and reset the BIOS, but that doesn't seem to help. I'm hoping I won't have to take it out and remove and reinstall everything again. Does anyone recognize these symptoms well enough to know exactly what the problem is? My guess is that the video card isn't getting enough juice initially to run stably enough to be detected. I just don't know what I did (or didn't do) to get it into this state. It's not a high-priority thing for me at the moment, as it just means I have to reset after initially turning it on, but I will eventually remove everything and reinstall if it comes to that. I don't think the specs are relevant here, but just in case, here's the relevant stuff:

        Motherboard: Gigabyte P35-DS3P
        Video: EVGA GeForce 8600 GTS
        PSU: Antec True Power Trio 650W

    Built ~2 years ago, still running well.

    Read the article

  • What NAS setup for syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too. My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use, I would get them in sync locally to start with.) Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, I have seen that QNAP seem to feature this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying two of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total. So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance.

    Some specifications:

    - Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    - Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight, that would be fine.
    - Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    - OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    - Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    - Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.

    Read the article

  • SQL SERVER – Public Training and Private Training – Differences and Similarities

    - by pinaldave
    Earlier this year, I was on the road with SQL Server seminars. I did many SQL Server Performance Trainings and SQL Server Performance Consultations throughout the year, but I feel the most rewarding exercise is always the one where the instructor learns something from the students, too. I was just talking to my wife, Nupur – she manages my logistics and administration related activities – and she pointed out that this year I have done 62% consultations and 38% trainings. I was a bit surprised, as I thought the numbers would be reversed. Every time I review the year, I think of the training done at organizations. Well, I cannot argue with reality: I have done more consultations (some would call them projects) than training. I told my wife that I enjoy consultations more than training. She promptly asked me a question which was not directly related but made me think for a long time, and in the end resulted in this blog post. Nupur asked me: what do I enjoy the most, public training or private training? I had a long conversation with her on this subject. I am not going to write a long blog post here that can change your life. This is rather a small post condensing my one-hour discussion into 200 words.

    Public training is fun because…

    - There are lots of different kinds of attendees
    - There are always vivid questions
    - Lots of questions on questions
    - Less interest in theory and more interest in demos
    - A good opportunity for future business

    Private training is fun because…

    - There is a focused interest
    - One question is discussed deeply because of existing company issues
    - More interest in "how it happened" concepts – under-the-hood operations
    - A good connection with attendees
    - This is also a good opportunity for future business

    Here I will stop my monologue, and I want to open up this question to all of you:

    Question to attendees - Which one do you enjoy the most, public training or private training?
    Question to trainers - What do you enjoy the most, public training or private training?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Separating an Array into a comma separated string with quotes

    - by user548744
    I'm manually building an SQL query where I'm using an Array in the params hash for an SQL IN statement, like: ("WHERE my_field IN('blue','green','red')"). So I need to take the contents of the array and output them into a string where each element is single quoted and comma separated (and with no trailing comma). So if the array was:

        my_array = ['blue','green','red']

    I'd need a string that looked like: "'blue','green','red'". I'm pretty new to Ruby/Rails but came up with something that worked:

        if !params[:colors].nil?
          @categories_array = params[:colors][:categories]
          @categories_string = ""
          for x in @categories_array
            @categories_string += "'" + x + "',"
          end
          @categories_string.chop! # remove the last comma
        end

    So, I'm good, but curious as to what a proper and more concise way of doing this would look like? Thanks
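
    As a sketch of the idiomatic one-liner (quoting aside, interpolating user input into SQL like this invites injection, so treat it with care):

        # Wrap each element in single quotes, then join with commas
        categories_string = my_array.map { |c| "'#{c}'" }.join(',')
        # => "'blue','green','red'"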

    Read the article

  • Theory: Can JIT Compiler be used to parse the whole program first, then execute later?

    - by unknownthreat
    Normally, a JIT compiler works by reading the byte code, translating it into machine code, and executing it. This is what I understand, but in theory, is it possible to make the JIT compiler parse the whole program first, and then execute the program later as machine code? I do not know exactly how a JIT compiler works technically, so I don't know what is feasible in this case. But theoretically, is it possible? Or am I getting it wrong?

    Read the article

  • Compiling linux library for mingw32

    - by TheFuzz
    I have been using a socket library for C++. Some other info: 32-bit Linux, Codelite and the GCC toolset. I want to be able to compile my program for Windows using the Windows edition of Codelite. The socket library I have been using doesn't have a mingw32 build, but it's open source. So how can I make a mingw32 build of the socket library so I can make a Windows build using the source provided?
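
    If the library's build is autotools-based, a cross build from Linux is often just a matter of pointing configure at the MinGW cross compiler. A sketch, assuming a Debian-style cross toolchain named i586-mingw32msvc is installed:

        ./configure --host=i586-mingw32msvc --prefix=$HOME/mingw32
        make && make install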

    Read the article

  • Tool for response time analysis on JBoss server?

    - by Ariel Vardi
    I am running a pretty high-traffic cluster of JBoss servers serving REST requests, and I am interested in tools that read the access logs in Tomcat format (with the %D parameter) to provide a detailed analysis of the response time on a per-call basis. Ideally this tool would generate a chart showing the progression of the response time throughout the day, hour by hour, then a weekly view with averages per day, and a monthly view with averages per week (CACTI style). I've looked for such tools and couldn't find anything. Are any of you guys aware of something close to that, before I start writing my own? I haven't looked into CACTI extensions yet, but would that be an option?

    Read the article

  • How to structure Python package that contains Cython code

    - by Craig McQueen
    I'd like to make a Python package containing some Cython code. I've got the Cython code working nicely. However, now I want to know how best to package it. For most people who just want to install the package, I'd like to include the .c file that Cython creates, and arrange for setup.py to compile that to produce the module. Then the user doesn't need Cython installed in order to install the package. But for people who may want to modify the package, I'd also like to provide the Cython .pyx files, and somehow also allow setup.py to build them using Cython (so those users would need Cython installed). How should I structure the files in the package to cater for both these scenarios? The Cython documentation gives a little guidance, but it doesn't say how to make a single setup.py that handles both the with/without-Cython cases.
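
    A common pattern for this is to let setup.py use Cython when it is importable and fall back to the shipped .c file otherwise. A minimal sketch, with the package and module names assumed:

        from distutils.core import setup
        from distutils.extension import Extension

        try:
            # Cython is available: build straight from the .pyx source
            from Cython.Distutils import build_ext
            ext = Extension("mypkg.mymod", ["mypkg/mymod.pyx"])
            cmdclass = {"build_ext": build_ext}
        except ImportError:
            # No Cython: compile the pre-generated C file instead
            ext = Extension("mypkg.mymod", ["mypkg/mymod.c"])
            cmdclass = {}

        setup(name="mypkg", ext_modules=[ext], cmdclass=cmdclass)

    Shipping the generated .c file in the source distribution then covers plain installs, while the .pyx stays in the repository for anyone who wants to modify it.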

    Read the article

  • How to display a busy message over a wpf screen

    - by dave
    Hey, I have a WPF application based on Prism 4. When performing slow operations, I want to show a busy screen. I will have a large number of screens, so I'm trying to build a single solution into the framework rather than adding a busy indicator to each screen. These long-running operations run in a background thread. This allows the UI to be updated (good) but does not stop the user from using the UI (bad). What I'd like to do is overlay a control with a spinning dial sort of thing and have that control cover the entire screen (the old HTML trick with DIVs). When the app is busy, the control would be displayed, thus blocking any further interaction as well as showing the spinny thing. To set this up, I thought I could just have my app screen in a canvas along with the spinny thing (with a greater ZIndex), then just make the spinny thing visible as required. This, however, is getting hard. Canvases do not seem well set up for this, and I think I might be barking up the wrong tree. I would appreciate any help. Thanks.
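
    For what it's worth, a Grid tends to fit this better than a Canvas, since Grid children stack on top of each other and stretch to fill automatically. A minimal sketch, with the IsBusy binding and the BooleanToVisibilityConverter resource assumed to exist:

        <Grid>
            <!-- normal screen content -->
            <ContentControl Content="{Binding CurrentView}" />
            <!-- semi-transparent overlay; covers the Grid and eats mouse input -->
            <Border Background="#80000000" Panel.ZIndex="1"
                    Visibility="{Binding IsBusy,
                                 Converter={StaticResource BooleanToVisibilityConverter}}">
                <TextBlock Text="Working..." Foreground="White"
                           HorizontalAlignment="Center" VerticalAlignment="Center" />
            </Border>
        </Grid>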

    Read the article

  • jQuery Ajax / .each callback, next 'each' firing before ajax completed

    - by StuR
    Hi, the JavaScript below is called when I submit a form. It first splits a bunch of URLs from a textarea. It then:

    1) Adds lines to a table for each URL, and in the last column (the 'status' column) it says "Not Started".
    2) Again it loops through each URL; first it makes an ajax call to check on the status (status.php), which will return a percentage from 0 - 100.
    3) In the same loop it kicks off the actual process via ajax (process.php); when the process has completed (bearing in mind the continuous status updates), it will then say "Completed" in the status column and exit the auto_refresh.
    4) It should then go to the next 'each' and do the same for the next URL.

        function formSubmit() {
            var lines = $('#urls').val().split('\n');

            $.each(lines, function(key, value) {
                $('#dlTable tr:last').after('<tr><td>' + value + '</td><td>Not Started</td></tr>');
            });

            $.each(lines, function(key, value) {
                var auto_refresh = setInterval(function () {
                    $.ajax({
                        url: 'status.php',
                        success: function(data) {
                            $('#dlTable').find("tr").eq(key + 1).children().last().replaceWith("<td>" + data + "</td>");
                        }
                    });
                }, 1000);

                $.ajax({
                    url: 'process.php?id=' + value,
                    success: function(msg) {
                        clearInterval(auto_refresh);
                        $('#dlTable').find("tr").eq(key + 1).children().last().replaceWith("<td>completed rip</td>");
                    }
                });
            });
        }
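
    The sequencing problem is that $.ajax is asynchronous: the second .each fires every process.php call at once instead of waiting for the previous one to finish. One way to make them sequential is to recurse from the success callback rather than looping. A sketch against the same markup and endpoints:

        function processNext(lines, key) {
            if (key >= lines.length) return; // all rows done
            $.ajax({
                url: 'process.php?id=' + lines[key],
                success: function(msg) {
                    $('#dlTable').find("tr").eq(key + 1).children().last()
                                 .replaceWith("<td>completed rip</td>");
                    processNext(lines, key + 1); // only now start the next one
                }
            });
        }
        processNext(lines, 0);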

    Read the article

  • How to optimize this mysql query - explain output included

    - by Sandeepan Nath
    This is the query (a search query, basically, based on tags):

        select SUM(DISTINCT(ttagrels.id_tag in (2105,2120,2151,2026,2046))) as key_1_total_matches,
               td.*, u.*
        from Tutors_Tag_Relations AS ttagrels
        join Tutor_Details AS td on td.id_tutor = ttagrels.id_tutor
        join Users as u on u.id_user = td.id_user
        where (ttagrels.id_tag in (2105,2120,2151,2026,2046))
        group by td.id_tutor
        having key_1_total_matches = 1

    And following is the database dump needed to execute this query:

        CREATE TABLE IF NOT EXISTS `Users` (
          `id_user` int(10) unsigned NOT NULL auto_increment,
          `id_group` int(11) NOT NULL default '0',
          PRIMARY KEY (`id_user`),
          KEY `Users_FKIndex1` (`id_group`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730;

        INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1);

        CREATE TABLE IF NOT EXISTS `Tutor_Details` (
          `id_tutor` int(10) unsigned NOT NULL auto_increment,
          `id_user` int(10) NOT NULL default '0',
          PRIMARY KEY (`id_tutor`),
          KEY `Users_FKIndex1` (`id_user`),
          KEY `id_user` (`id_user`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58;

        INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303);

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (2026, 'Brendan.\nIn'),
        (2046, 'Brendan.'),
        (2105, 'Brendan'),
        (2120, 'Brendan''s'),
        (2151, 'Brendan)');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) unsigned default NULL,
          `tutor_field` varchar(255) default NULL,
          `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
          `udate` timestamp NULL default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`),
          KEY `id_tutor_2` (`id_tutor`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`)
        VALUES (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL);

        ALTER TABLE `Tutors_Tag_Relations`
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`) REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION,
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`) REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION;

    What does the query do? It searches for tutors that match "Brendan" (in their name, biography or the like). The id_tags 2105, 2120, 2151, 2026 and 2046 are simply the tags which are LIKE "%Brendan%". My questions are:

    1. In the EXPLAIN of this query, the ref column shows NULL for ttagrels, but there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). So why is no key being used, and how can I make the query use one? Is it possible at all?
    2. The other two tables, td and u, are using references. Is any indexing needed on those? I think not.

    Check the EXPLAIN query output here: http://www.test.examvillage.com/explain.png
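
    One index worth trying, as a sketch: a composite index covering both the filtered column and the join column, so the tag filter and the join to Tutor_Details can be satisfied from a single index:

        ALTER TABLE `Tutors_Tag_Relations`
          ADD INDEX `idx_tag_tutor` (`id_tag`, `id_tutor`);

    Note that with only a handful of rows, as in this sample dump, the optimizer may still prefer a full scan; key choice only becomes meaningful at realistic table sizes.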

    Read the article

  • How can I ensure my programmatic uploads are done in the correct order?

    - by ccomet
    In our application, we store two copies of a file: an approved one and an unapproved one. Both track their versions separately. When the unapproved file is approved, all of its versions are added as new versions to the approved file. To do this properly, my code has to upload each version separately into the approved folder, and update the item each time with that version's information. For some reason, though, this doesn't always work properly. In my latest scenario, the latest version was uploaded first, and then all of the remaining versions were uploaded afterwards. However, my code is explicitly supposed to upload the other versions first; that's the order I wrote it in. Why is this happening? And if it is possible, how do I ensure that the versions are uploaded in the correct order?

    Clarification: it's not a problem with the enumeration; I'm getting the previous versions in the correct order. What is happening is that the final version, which is written after the loop, is being uploaded before the loop. Which really doesn't make any sense to me. Here's a condensed version of the relevant code:

        // These three are initialized earlier in the code.
        SPList list;      // The document library
        SPListItem item;  // The list item in the Unapproved folder
        int AID;          // The item id of the corresponding item in the Approved folder
        byte[] contents;  // Not initialized.

        /* These uploads are happening second when they should happen first. */
        if (item.File.Versions.Count > 0)
        {
            // This loop is actually a separate method call, if that matters.
            // For simplicity I expanded it here.
            foreach (SPFileVersion fVer in item.File.Versions)
            {
                if (!fVer.IsCurrentVersion)
                {
                    contents = fVer.OpenBinary();
                    SPFile fSub = aFolder.Files.Add(fVer.File.Name, contents, u1, fVer.CreatedBy, dt1, fVer.Created);
                    SPListItem subItem = list.GetItemById(AID);
                    // This method updates the newly uploaded version with the field data of that version.
                    UpdateFields(item.Versions.GetVersionFromLabel(fVer.VersionLabel), subItem);
                }
            }
        }

        /* This upload happens first when it should happen last. */
        // Does the same as the earlier loop, but for the final version.
        contents = item.File.OpenBinary();
        SPFile f = aFolder.Files.Add(item.File.Name, contents, u1, u2, dt1, dt2);
        SPListItem finalItem = list.GetItemById(AID);
        UpdateFields(item.Versions[0], finalItem);
        item.Delete();

    Read the article
