Search Results

Search found 16 results on 1 page for 'myrddin emrys'.

Page 1/1

  • Why can't I navigate Active Directory within PowerShell?

    - by Myrddin Emrys
    I have an AD: drive, which should allow me to browse Active Directory from within PowerShell. But when I try to use it, it will not let me navigate beyond the root. From what I have read the commands below should work, but they are failing.

        PS AD:\> ls

        Name            ObjectClass    DistinguishedName
        ----            -----------    -----------------
        company         domainDNS      DC=company,DC=com
        Configuration   configuration  CN=Configuration,DC=company,DC=com
        Schema          dMD            CN=Schema,CN=Configuration,DC=company,DC=com
        ForestDnsZones  domainDNS      DC=ForestDnsZones,DC=company,DC=com
        DomainDnsZones  domainDNS      DC=DomainDnsZones,DC=company,DC=com

        PS AD:\> cd schema
        Set-Location : Cannot find path 'AD:\schema' because it does not exist.
        At line:1 char:3
        + cd <<<< schema
            + CategoryInfo          : ObjectNotFound: (AD:\schema:String) [Set-Location], ItemNotFoundException
            + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand

        PS AD:\> cd Schema
        Set-Location : Cannot find path 'AD:\Schema' because it does not exist.
        (duplicate of previous error)

        PS AD:\> cd company
        Set-Location : Cannot find path 'AD:\company' because it does not exist.
        (duplicate of previous error)

        PS AD:\> ls Schema
        Get-ChildItem : Cannot find path '//RootDSE/Schema' because it does not exist.
        (duplicate of previous error)

        PS AD:\> cd ForestDnsZones
        Set-Location : Cannot find path 'AD:\ForestDnsZones' because it does not exist.
        (duplicate of previous error)
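
    A likely explanation, though the post doesn't confirm it: the AD: provider names the children of the drive root by distinguished name, not by the friendly Name column that ls displays. A minimal sketch of DN-based navigation, assuming the domain shown above:

        Import-Module ActiveDirectory    # the AD: drive ships with this module

        # Children of AD:\ are addressed by distinguished name, so quote the DN:
        Set-Location 'AD:\DC=company,DC=com'
        Get-ChildItem

        # Schema lives under Configuration, not under the drive root:
        Set-Location 'AD:\CN=Schema,CN=Configuration,DC=company,DC=com'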

    Read the article

  • How do I allow end user 'blocked sender lists' to work even if the server manages spam filtering?

    - by Myrddin Emrys
    Client filtering in Outlook 2010 is disabled when the server is managing spam filtering. Unfortunately I have a few high-profile users who prefer to spam-block mailing lists rather than unsubscribe, so even though the email is not really spam, they are upset that it is coming into their mailbox. As seen here, I am not the first person to wrestle with this issue, and the suggested fix there (setting New-FseExtendedOption -Name CFAllowBlockedSenders -Value true) also failed to work for me. Can anyone provide another possible fix? Thank you kindly.

    Read the article

  • A network share folder is invisible to users

    - by Myrddin Emrys
    I have a network share folder whose permissions I was recently cleaning up. I removed the four individual names from the folder's access permissions and added a new security group (Universal) with standard Read/Write permissions on that folder, then added those 4 people to the group. However... now nobody can see the folder. The users can see the other 9 folders in that shared drive, but the 10th is missing. I cannot see any permission on the parent folder or on the folder itself that would make it invisible to anyone, regardless of whether they have permission to open it or edit files within.

    Read the article

  • What's the easiest way to allow Exchange 2003 remote (no MSO client) users to check their mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003 with no quota settings to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean out their mailboxes prior to the move, both to reduce the transfer load and to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for them to check their mail store size in this manner. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only workaround I've come up with is a report that IT generates and mail-merges out to users daily with their current mailbox size. That is cumbersome and time-consuming, however, compared to a way for them to check it themselves.
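
    In case it helps with the report half of that workaround: Exchange 2003 exposes mailbox sizes through WMI, so the daily numbers can be pulled with a short script. A sketch; Exchange_Mailbox and its Size/TotalItems properties are the real Exchange 2003 WMI class, while MAILSERVER and the output file are placeholders:

        # Dump every mailbox's size (the WMI Size property is in KB) to a CSV
        # that the daily mail merge can consume:
        Get-WmiObject -ComputerName MAILSERVER `
            -Namespace 'root\MicrosoftExchangeV2' -Class Exchange_Mailbox |
          Sort-Object Size -Descending |
          Select-Object MailboxDisplayName,
            @{n='SizeMB'; e={[math]::Round($_.Size / 1024, 1)}},
            TotalItems |
          Export-Csv mailbox_sizes.csv -NoTypeInformation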

    Read the article

  • Improved ACL editor for Windows file permissions

    - by Myrddin Emrys
    I have recently been doing a lot of updates to our network drive permissions... such as consolidating direct user permissions into group permissions. The built-in ACL editor (the Advanced Security Settings dialog) is adequate, but its limitations are frustrating: in particular, it cannot be resized, and you cannot view the list of existing entries while adding a new one. Is there an improved ACL editor that can be downloaded to supplement the default one?

    Read the article

  • Internal Code Signing: Key Distribution, or Certificate Server?

    - by Myrddin Emrys
    I should first note that we have nobody in IT with significant familiarity with self-signed certificates. We have a moderately sprawling network (one forest, many locations), and we are now rolling out internal code signing; until now users have run untrusted code, or we even disabled(!) the warnings. Intranet applications, scripts, and sites will now be signed with self-signed certificates. I am aware of two obvious ways we can deploy this: distributing the keys directly via a group policy, or setting up a cert server. Can someone explain the trade-offs between these two methods?

    - How many certs before the group policy method is unwieldy?
    - Are they large enough that remote users will have issues?
    - Does the group policy method distribute duplicates on every login?
    - Is there a better method I am not aware of?

    I can find a lot of documentation on certificates and the various ways to create them, but I have not been able to find anything that summarizes the difference between the distribution methods and what criteria make one or the other superior.
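
    Not a full answer to the trade-off question, but a sketch for reference (the file name is a placeholder): whichever route is chosen, what actually gets distributed to clients is trust in the signing root, and the stock certutil tool can stage that either into AD or onto a single machine for testing:

        # Publish the signing root into AD; domain members pick it up on refresh:
        certutil -dspublish -f CompanyCodeSigningRoot.cer RootCA

        # Or import it into one machine's trusted-root store for testing:
        certutil -addstore Root CompanyCodeSigningRoot.cer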

    Read the article

  • Is it possible to open an Active Directory or Exchange Management Console user dialog directly from PowerShell?

    - by Myrddin Emrys
    I'd like to be able to launch either the AD user dialog or the EMC mailbox dialog directly from a PowerShell script, opened to a specific user. The workflow is something to the effect of "Does everything look correct on this user? Y/N", either continuing on or bringing up the account to edit. There's no reason to completely duplicate the functionality of these dialogs. I don't mind requiring that EMC or ADU&C already be open before the script is run, if necessary.

    Read the article

  • css: Cross-browser, reflowing, top-to-bottom, multi-column lists

    - by Sai Emrys
    See http://cssfingerprint.com/about#stats. See also http://stackoverflow.com/questions/933645/multi-column-css-lists. I want a multi-column list that:

    - uses no JS
    - reflows on window size
    - makes as many columns as fit the enclosing element
    - therefore, does not require batching the list into manual column groups
    - works in all browsers
    - works for an arbitrary number of unknown-width (but single-line-height) elements
    - makes each column fit the width of its (dynamic) contents
    - does not create scrollbars or other overflow issues
    - is sorted top to bottom where possible

    My code is currently:

        ul.multi, ol.multi {
          width: 100%;
          margin: 0;
          padding: 0;
          list-style: none;
          -moz-column-width: 12em;
          -webkit-column-width: 12em;
          column-width: 12em;
          -moz-column-gap: 1em;
          -webkit-column-gap: 1em;
          column-gap: 1em;
        }
        ul.multi li, ol.multi li {
          <!--[if IE]>
          float: left;
          <![endif]-->
          width: 20em;
          margin: 0;
          padding: 0;
        }

    Although this works okay, it has some problems:

    - I have to guess the content width
    - it is right-to-left in IE (though this is acceptable as a graceful degradation mode)
    - it won't work at all in non-IE, non-Moz/Webkit/CSS3 browsers

    How can this be improved?

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this:

        CREATE TABLE `browser_tests` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `bogus` tinyint(1) DEFAULT NULL,
          `result` tinyint(1) DEFAULT NULL,
          `method` varchar(255) DEFAULT NULL,
          `url` varchar(255) DEFAULT NULL,
          `os` varchar(255) DEFAULT NULL,
          `browser` varchar(255) DEFAULT NULL,
          `version` varchar(255) DEFAULT NULL,
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          `user_agent` varchar(255) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1

        CREATE TABLE `method_timings` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `method` varchar(255) DEFAULT NULL,
          `batch_size` int(11) DEFAULT NULL,
          `timing` int(11) DEFAULT NULL,
          `os` varchar(255) DEFAULT NULL,
          `browser` varchar(255) DEFAULT NULL,
          `version` varchar(255) DEFAULT NULL,
          `user_agent` varchar(255) DEFAULT NULL,
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1

    (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.)

    I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple:

        select c, avg(bogus) as bog, timing, method, browser, version
        from browser_tests as b
        inner join (
          select count(*) as c, round(avg(timing)) as timing,
                 method, browser, version
          from method_timings
          group by browser, version, method
          having c > 10
          order by browser, version, timing
        ) as t using (browser, version, method)
        group by browser, version, method
        having bog < 1
        order by browser, version, timing;

    Which returns something like:

        c    bog     tim  method             browser  version
        88   0.8333  184  reuse_insert       Chrome   4.0.249.89
        18   0.0000  238  mass_insert_width  Chrome   4.0.249.89
        70   0.0400  246  mass_insert        Chrome   4.0.249.89
        70   0.0400  327  mass_noinsert      Chrome   4.0.249.89
        88   0.0556  367  reuse_reinsert     Chrome   4.0.249.89
        88   0.0556  383  jquery             Chrome   4.0.249.89
        88   0.0556  863  full_reinsert      Chrome   4.0.249.89
        187  0.0000  105  jquery             Chrome   5.0.307.11
        187  0.8806  109  reuse_insert       Chrome   5.0.307.11
        123  0.0000  110  mass_insert_width  Chrome   5.0.307.11
        176  0.0000  231  mass_noinsert      Chrome   5.0.307.11
        176  0.0000  237  mass_insert        Chrome   5.0.307.11
        187  0.0000  314  reuse_reinsert     Chrome   5.0.307.11
        187  0.0000  372  full_reinsert      Chrome   5.0.307.11
        12   0.7500  82   reuse_insert       Chrome   5.0.335.0
        12   0.2500  102  jquery             Chrome   5.0.335.0
        [...]

    I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like:

        88   0.8333  184  reuse_insert       Chrome   4.0.249.89
        187  0.0000  105  jquery             Chrome   5.0.307.11
        12   0.7500  82   reuse_insert       Chrome   5.0.335.0
        [...]

    How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
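
    For what it's worth, the canonical groupwise-minimum shape for this is to join the aggregated rows back against a per-group MIN(). A sketch, untested against the data above; it omits the bogus filter from browser_tests for brevity, and ties on timing would return both rows:

        SELECT t.*
        FROM (
            -- the existing per-method aggregate
            SELECT count(*) AS c, round(avg(timing)) AS timing,
                   method, browser, version
            FROM method_timings
            GROUP BY browser, version, method
            HAVING c > 10
        ) AS t
        INNER JOIN (
            -- the lowest average timing per browser/version
            SELECT browser, version, MIN(timing) AS timing
            FROM (
                SELECT round(avg(timing)) AS timing, method, browser, version
                FROM method_timings
                GROUP BY browser, version, method
                HAVING count(*) > 10
            ) AS x
            GROUP BY browser, version
        ) AS best USING (browser, version, timing)
        ORDER BY browser, version;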

    Read the article

  • Student's t distribution in JavaScript

    - by Sai Emrys
    Google Spreadsheets currently does not support the standard function TDIST - i.e. the Student's t-distribution. This function is critical for calculating p-values. It seems that this is related to the fact that no integral-using functions (AFAICT) are implemented either. However, Google Docs allows people to add and publish their own scripts, in JavaScript. So ideally we should have something like: function tdist(t_value, degrees_of_freedom, two_tailed [defaults true]) {...} Anyone know of either an extant implementation of this (my google-fu has not turned up one, but may be weaker than yours) or a good idea for how to do it? I'd like to publish this together with some other useful functions that are currently calculable but a bit of a pain (like Student's t-test itself). Thanks!
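
    In case it is useful as a starting point: a self-contained sketch along Numerical Recipes lines, using the identity that the two-tailed p-value equals the regularized incomplete beta function I_x(df/2, 1/2) with x = df/(df + t^2). Treat it as unverified against Excel's TDIST; t_value is assumed non-negative, as TDIST requires.

        // Log of the gamma function (Lanczos approximation).
        function logGamma(x) {
          var cof = [76.18009172947146, -86.50532032941677, 24.01409824083091,
                     -1.231739572450155, 0.1208650973866179e-2,
                     -0.5395239384953e-5];
          var y = x;
          var tmp = x + 5.5;
          tmp -= (x + 0.5) * Math.log(tmp);
          var ser = 1.000000000190015;
          for (var j = 0; j < 6; j++) ser += cof[j] / ++y;
          return -tmp + Math.log(2.5066282746310005 * ser / x);
        }

        // Continued-fraction evaluation for the incomplete beta function.
        function betacf(a, b, x) {
          var MAXIT = 100, EPS = 3e-7, FPMIN = 1e-30;
          var qab = a + b, qap = a + 1, qam = a - 1;
          var c = 1, d = 1 - qab * x / qap;
          if (Math.abs(d) < FPMIN) d = FPMIN;
          d = 1 / d;
          var h = d;
          for (var m = 1; m <= MAXIT; m++) {
            var m2 = 2 * m;
            var aa = m * (b - m) * x / ((qam + m2) * (a + m2));
            d = 1 + aa * d; if (Math.abs(d) < FPMIN) d = FPMIN;
            c = 1 + aa / c; if (Math.abs(c) < FPMIN) c = FPMIN;
            d = 1 / d; h *= d * c;
            aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2));
            d = 1 + aa * d; if (Math.abs(d) < FPMIN) d = FPMIN;
            c = 1 + aa / c; if (Math.abs(c) < FPMIN) c = FPMIN;
            d = 1 / d;
            var del = d * c; h *= del;
            if (Math.abs(del - 1) < EPS) break;
          }
          return h;
        }

        // Regularized incomplete beta function I_x(a, b).
        function ibeta(a, b, x) {
          if (x <= 0) return 0;
          if (x >= 1) return 1;
          var bt = Math.exp(logGamma(a + b) - logGamma(a) - logGamma(b) +
                            a * Math.log(x) + b * Math.log(1 - x));
          // Use the continued fraction directly, or via the symmetry
          // relation, whichever converges faster:
          if (x < (a + 1) / (a + b + 2)) return bt * betacf(a, b, x) / a;
          return 1 - bt * betacf(b, a, 1 - x) / b;
        }

        // TDIST analogue: p-value for t_value >= 0 with the given degrees
        // of freedom; two_tailed defaults to true.
        function tdist(t_value, degrees_of_freedom, two_tailed) {
          if (two_tailed === undefined) two_tailed = true;
          var df = degrees_of_freedom;
          var p = ibeta(df / 2, 0.5, df / (df + t_value * t_value));
          return two_tailed ? p : p / 2;
        }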

    Read the article

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com I have a system (see about page on site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap sql queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far.

    Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
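
    One cheap-SQL straw man for the exploitation half (entirely a sketch of mine, against a hypothetical hits(user_id, site_id, hit) table): rank sites by how close their hit rate is to 50% among the still-plausible users, damped by log of sample size so barely-tested sites don't dominate. An exploration bonus for small n could be added to the same score.

        SELECT site_id,
               COUNT(*) AS n,
               AVG(hit) AS p,
               -- a binary feature discriminates best when p is near 0.5
               (1 - 2 * ABS(AVG(hit) - 0.5)) * LOG(COUNT(*) + 1) AS score
        FROM hits
        WHERE user_id IN (/* plausible-so-far user IDs */)
        GROUP BY site_id
        ORDER BY score DESC
        LIMIT 1000;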

    Read the article

  • CSS/JavaScript/hacking: Detect :visited styling on a link *without* checking it directly OR do it faster

    - by Sai Emrys
    This is for research purposes on http://cssfingerprint.com Consider the following code:

        <style>
          div.csshistory a { display: none; color: #00ff00; }
          div.csshistory a:visited { display: inline; color: #ff0000; }
        </style>
        <div id="batch" class="csshistory">
          <a id="1" href="http://foo.com">anything you want here</a>
          <a id="2" href="http://bar.com">anything you want here</a>
          [etc * ~2000]
        </div>

    My goal is to detect whether foo has been rendered using the :visited styling. Specifically:

    1. I want to detect whether foo.com is visited without directly looking at $('1').getComputedStyle (or in Internet Explorer, currentStyle), or any other direct method on that element. The purpose of this is to get around a potential browser restriction that would prevent direct inspection of the style of visited links. For instance, maybe you can put a sub-element in the <a> tag, or check the styling of the text directly; etc. Any method that does not directly or indirectly rely on $('1').anything is acceptable. Doing something clever with the child or parent is probably necessary. Note that for the purposes of this point only, the scenario is that the browser will lie to JavaScript about all properties of the <a> element (but not others), and that it will only render color: in :visited. Therefore, methods that rely on e.g. text size or background-image will not meet this requirement.

    2. I want to improve the speed of my current scraping methods. The majority of time (at least with the jQuery method in Firefox) is spent on document.body.appendChild(batch), so finding a way to improve that call would probably be most effective. See http://cssfingerprint.com/about and http://cssfingerprint.com/results for current speed test results.

    The methods I am currently using can be seen at http://github.com/saizai/cssfingerprint/blob/master/public/javascripts/history_scrape.js To summarize for tl;dr, they are:

    - set color or display on :visited per above, and check each one directly w/ getComputedStyle
    - put the ID of the link (plus a space) inside the <a> tag, and using jQuery's :visible selector, extract only the visible text (= the visited link IDs)

    FWIW, I'm a white hat, and I'm doing this in consultation with the EFF and some other fairly well known security researchers. If you contribute a new method or speedup, you'll get thanked at http://cssfingerprint.com/about (if you want to be :-P), and potentially in a future published paper.

    ETA: The bounty will be awarded only for suggestions that can, on Firefox, avoid the hypothetical restriction described in point 1 above, or perform at least 10% faster, on any browser for which I have sufficient current data, than my best-performing methods listed in the graph at http://cssfingerprint.com/about In case more than one suggestion fits either criterion, the one that does best wins.
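
    For concreteness, a sketch of the second current method above (assuming, per the markup convention described, that each <a>'s text is its ID plus a trailing space):

        // Only :visited links render (display: inline), so .text() on the
        // :visible set concatenates exactly the visited IDs:
        var visitedIds = jQuery.grep(
          $('#batch a:visible').text().split(' '),
          function (s) { return s.length > 0; }
        );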

    Read the article

  • RockBand-like voice app for PC/OSX / Real time pitch display software

    - by Sai Emrys
    I played Rock Band 2 for the first time a little while ago (at Notacon). One thing I enjoyed about it was getting real-time feedback about my singing. I think it'd be neat to have something like that to run alongside my usual music, so that I can sing to random stuff in my music collection and know when I'm hitting the notes. Is there something like this for PC - ideally for OSX, and ideally that can just operate on arbitrary songs? I don't really care if it's game-like (though that's neat too); I just want it for the singing feedback. And I have no need for pitch correction - ideally what I'd see is just the pitches of the notes in the music and (on the same scale, differently displayed) of the live microphone. I tried to STFW but got no salient hits. :-/ Thanks!

    Read the article

  • mysql: Average over multiple columns in one row, ignoring nulls

    - by Sai Emrys
    I have a large table (of sites) with several numeric columns - say a through f. (These are site rankings from different organizations, like alexa, google, quantcast, etc. Each has a different range and format; they're straight dumps from the outside DBs.) For many of the records, one or more of these columns is null, because the outside DB doesn't have data for it. They all cover different subsets of my DB. I want column t to be their weighted average (each of a..f have static weights which I assign), ignoring null values (which can occur in any of them), except being null if they're all null. I would prefer to do this with a simple SQL calculation, rather than doing it in app code or using some huge ugly nested if block to handle every permutation of nulls. (Given that I have an increasing number of columns to average over as I add in more outside DB sources, this would be exponentially more ugly and bug-prone.) I'd use AVG but that's only for group by, and this is w/in one record. The data is semantically nullable, and I don't want to average in some "average" value in place of the nulls; I want to only be counting the columns for which data is there. Is there a good way to do this? Ideally, what I want is something like UPDATE sites SET t = AVG(a*a_weight,b*b_weight,...) where any null values are just ignored and no grouping is happening.
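
    One way to keep it in plain SQL (a sketch with made-up weights for columns a..c; extend the same pattern through f): COALESCE each weighted term to 0, divide by the total weight of the columns that actually have data, and let NULLIF handle the all-null case.

        -- MySQL evaluates (a IS NOT NULL) as 1 or 0, so the denominator is
        -- the sum of the weights with data; NULLIF makes it NULL (rather
        -- than a division by zero) when every column is NULL.
        UPDATE sites
        SET t = ( COALESCE(a * 0.5, 0)
                + COALESCE(b * 0.3, 0)
                + COALESCE(c * 0.2, 0) )
                / NULLIF( (a IS NOT NULL) * 0.5
                        + (b IS NOT NULL) * 0.3
                        + (c IS NOT NULL) * 0.2, 0 );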

    Read the article

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each tend you towards their own demographics. However, just taking the average is poor, because it means that by adding in a lot of generic data, the number goes down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give. Also, consider that most demographics (i.e. everything other than gender) do not have an average of 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status. What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap & easy to do in mysql?
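
    One standard approach that matches this wish list (my suggestion, not from the post): pool the per-site probabilities in log-odds space relative to the population mean, so generic sites contribute almost nothing while extreme ones dominate; clamping keeps 0%/100% sites finite. A MySQL sketch against a hypothetical visited(user_id, p) table:

        SET @m = 0.37;  -- population mean for this demographic

        SELECT user_id,
               -- prior logit plus each site's offset from it, squashed back
               1 / (1 + EXP(-(
                   LN(@m / (1 - @m))
                 + SUM( LN(cp / (1 - cp)) - LN(@m / (1 - @m)) )
               ))) AS estimate
        FROM ( SELECT user_id,
                      LEAST(GREATEST(p, 0.001), 0.999) AS cp  -- clamp 0/1
               FROM visited ) AS v
        GROUP BY user_id;

    On the gender example above (mean 0.5), fifty 48% sites shift the pooled logit by only about -4 in total while the one ~100% site adds about +7, so the estimate lands near 95% rather than 49%.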

    Read the article

  • Change a finder method w/ parameters to an association

    - by Sai Emrys
    How do I turn this into a has_one association? (Possibly has_one + a named scope for size.)

        class User < ActiveRecord::Base
          has_many :assets, :foreign_key => 'creator_id'

          def avatar_asset size = :thumb
            # The LIKE is because it might be a .jpg, .png, or .gif.
            # More efficient methods that can handle that are OK. ;)
            self.assets.find :first, :conditions =>
              ["thumbnail = '#{size}' and filename LIKE ?", self.login + "_#{size}.%"]
          end
        end

    EDIT: Cuing from AnalogHole on Freenode #rubyonrails, we can do this:

        has_many :assets, :foreign_key => 'creator_id' do
          def avatar size = :thumb
            find :first, :conditions =>
              ["thumbnail = ? and filename LIKE ?", size.to_s, proxy_owner.login + "_#{size}.%"]
          end
        end

    ... which is fairly cool, and makes the syntax a bit better at least. However, this still doesn't behave as well as I would like. In particular, it doesn't allow further find chaining (such that it doesn't execute this find until it's gotten all its conditions). More importantly, it can't be used in an :include. Ideally I want to do something like this:

        # PostsController
        def show
          post = Post.get_cache(params[:id]) {
            Post.find(params[:id],
              :include => { :comments => { :users => { :avatar_asset => :thumb } } })
          }
          ...
        end

    ... so that I can cache the assets together with the post. Or cache them at all, really - e.g. get_cache(user_id){User.find(user_id, :include => :avatar_assets)} would be a good first pass. This doesn't actually work (self == User), but is correct in spirit:

        has_many :avatar_assets, :foreign_key => 'creator_id',
          :class_name => 'Asset',
          :conditions => ["filename LIKE ?", self.login + "_%"]

    (Also posted on Refactor My Code.)

    Read the article
