Search Results

Search found 25075 results on 1003 pages for 'default trace'.

Page 262/1003

  • How do I pan the contents of a div when the right mouse button is dragged?

    - by Shaunwithanau
    I need a cross-browser way of capturing the right mouse button, preventing the default context menu, and letting the user pan the contents of a div by dragging the mouse. This is largely similar to Google Maps, where you grip the contents and drag to see what you want. No external libraries, please. I am already capturing the events, and I know that this will prevent default actions: if (evt.preventDefault) { evt.preventDefault(); } else { evt.returnValue = false; } But this doesn't prevent the context menu, AFAIK. Edit: I am still unsure how to prevent the context menu and what the best way to manipulate the scroll bars would be. Examples would be great.
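
    A minimal sketch of one way to do this (plain JavaScript; the container id "viewport" is hypothetical, and older IE needs attachEvent/event.returnValue in place of addEventListener/preventDefault): cancel the contextmenu event to suppress the menu, then pan by adjusting scrollLeft/scrollTop while the right button is held down.

      var viewport = document.getElementById('viewport'); // scrollable div with overflow: auto or hidden
      var panning = false, startX = 0, startY = 0, startLeft = 0, startTop = 0;

      // Cancelling contextmenu is what actually suppresses the menu.
      viewport.addEventListener('contextmenu', function (e) { e.preventDefault(); });

      viewport.addEventListener('mousedown', function (e) {
        if (e.button !== 2) { return; }            // right button only
        panning = true;
        startX = e.clientX;  startY = e.clientY;
        startLeft = viewport.scrollLeft;  startTop = viewport.scrollTop;
        e.preventDefault();
      });

      document.addEventListener('mousemove', function (e) {
        if (!panning) { return; }
        viewport.scrollLeft = startLeft - (e.clientX - startX);  // drag right, content follows
        viewport.scrollTop  = startTop  - (e.clientY - startY);
      });

      document.addEventListener('mouseup', function () { panning = false; });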

    Read the article

  • bi-directional o2m/m2o beats uni-directional o2m in SQL efficiency?

    - by Henry
    Take these two persistent CFCs as an example: // Cat.cfc component persistent="true" { property name="id" fieldtype="id" generator="native"; property name="name"; } // Owner.cfc component persistent="true" { property name="id" fieldtype="id" generator="native"; property name="cats" type="array" fieldtype="one-to-many" cfc="cat" cascade="all"; } When one-to-many (unidirectional), note that inverse=true on a unidirectional mapping yields an undesired result: insert into cat (name) values (?) insert into Owner default values update cat set Owner_id=? where id=? When one-to-many/many-to-one (bi-directional, inverse=true on Owner.cats): insert into Owner default values insert into cat (name, ownerId) values (?, ?) Does that mean setting up a bi-directional o2m/m2o relationship is preferred because the SQL for inserting the entities is more efficient?
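
    For reference, a sketch of the bi-directional mapping being compared (attribute names as in ColdFusion 9 ORM; the ownerId column name is taken from the generated SQL above): Cat carries the many-to-one back-reference and Owner marks its collection inverse, which is what lets Hibernate write the foreign key in the cat insert itself.

      // Cat.cfc (owning side of the bi-directional relationship)
      component persistent="true" {
          property name="id" fieldtype="id" generator="native";
          property name="name";
          property name="owner" fieldtype="many-to-one" cfc="Owner" fkcolumn="ownerId";
      }
      // Owner.cfc (inverse side)
      component persistent="true" {
          property name="id" fieldtype="id" generator="native";
          property name="cats" type="array" fieldtype="one-to-many" cfc="Cat"
                   fkcolumn="ownerId" cascade="all" inverse="true";
      }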

    Read the article

  • Xcache - No difference after using it

    - by Charles Yeung
    Hi I have installed Xcache in my site(using xampp), I have tested more then 10 times on several page and the result is same as default(no any cache installed), is it something wrong with the configure? Updated [xcache-common] ;; install as zend extension (recommended), normally "$extension_dir/xcache.so" zend_extension = /usr/local/lib/php/extensions/non-debug-non-zts-xxx/xcache.so zend_extension_ts = /usr/local/lib/php/extensions/non-debug-zts-xxx/xcache.so ;; For windows users, replace xcache.so with php_xcache.dll zend_extension_ts = C:\xampp\php\ext\php_xcache.dll ;; or install as extension, make sure your extension_dir setting is correct ; extension = xcache.so ;; or win32: ; extension = php_xcache.dll [xcache.admin] xcache.admin.enable_auth = On xcache.admin.user = "mOo" ; xcache.admin.pass = md5($your_password) xcache.admin.pass = "" [xcache] ; ini only settings, all the values here is default unless explained ; select low level shm/allocator scheme implemenation xcache.shm_scheme = "mmap" ; to disable: xcache.size=0 ; to enable : xcache.size=64M etc (any size > 0) and your system mmap allows xcache.size = 60M ; set to cpu count (cat /proc/cpuinfo |grep -c processor) xcache.count = 1 ; just a hash hints, you can always store count(items) > slots xcache.slots = 8K ; ttl of the cache item, 0=forever xcache.ttl = 0 ; interval of gc scanning expired items, 0=no scan, other values is in seconds xcache.gc_interval = 0 ; same as aboves but for variable cache xcache.var_size = 4M xcache.var_count = 1 xcache.var_slots = 8K ; default ttl xcache.var_ttl = 0 xcache.var_maxttl = 0 xcache.var_gc_interval = 300 xcache.test = Off ; N/A for /dev/zero xcache.readonly_protection = Off ; for *nix, xcache.mmap_path is a file path, not directory. ; Use something like "/tmp/xcache" if you want to turn on ReadonlyProtection ; 2 group of php won't share the same /tmp/xcache ; for win32, xcache.mmap_path=anonymous map name, not file path xcache.mmap_path = "/dev/zero" ; leave it blank(disabled) or "/tmp/phpcore/" ; make sure it's writable by php (without checking open_basedir) xcache.coredump_directory = "" ; per request settings xcache.cacher = On xcache.stat = On xcache.optimizer = Off [xcache.coverager] ; per request settings ; enable coverage data collecting for xcache.coveragedump_directory and xcache_coverager_start/stop/get/clean() functions (will hurt executing performance) xcache.coverager = Off ; ini only settings ; make sure it's readable (care open_basedir) by coverage viewer script ; requires xcache.coverager=On xcache.coveragedump_directory = "" Thanks you

    Read the article

  • Suppress/Override application.html.erb

    - by user310628
    Is there any way to suppress the default JS and CSS loaded by application.html.erb on a view-by-view basis? I'm finding it incredibly difficult to manage a global CSS and JS include configuration when certain views need different JS libraries that seem to conflict with one another. For example, I might want jQuery 1.3 for one view and 1.4.2 for another. I don't necessarily want an include for every view (I do want a global site-wide default), but I would also like to be able to override it for any view I choose. Thanks!
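
    One common pattern (a sketch assuming Rails 3's content_for?; on Rails 2.x you would check the @content_for_head variable instead) is to give the layout a named block that views can fill, falling back to the site-wide default when they don't:

      <%# app/views/layouts/application.html.erb %>
      <head>
        <% if content_for?(:head) %>
          <%= yield :head %>
        <% else %>
          <%= javascript_include_tag 'jquery-1.3' %>   <%# site-wide default %>
        <% end %>
      </head>

      <%# a view that needs a different library %>
      <% content_for :head do %>
        <%= javascript_include_tag 'jquery-1.4.2' %>
      <% end %>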

    Read the article

  • SharePoint Blog Posts: Sort based on Published date

    - by Hojo
    I have a SharePoint intranet portal with many blogs, each with a customized design. We use the default Data Form Web Part to display blog posts. By default the posts are sorted by "created date". I have a new requirement from the client asking me to change the sort criterion to "published date". What is the easiest way to achieve this without using SharePoint Designer? Note: creating a new view is not a solution, as I would not be able to apply the customized design.

    Read the article

  • Cannot Extend Django 1.2.1 Admin Template

    - by jcady
    I am attempting to override/extend the header for the Django admin in version 1.2.1. However, when I try to extend the admin template and change only what I need (documented here: http://docs.djangoproject.com/en/dev/ref/contrib/admin/#overriding-vs-replacing-an-admin-template), I run into a recursion problem. I have an index.html file in my project's templates/admin/ directory that starts with {% extends "admin/index.html" %} but this seems to reference the local index file (i.e. itself) rather than the default Django copy. I want to extend the default Django template and change only a few blocks. When I try this file, I get a recursion depth error. How can I extend parts of the admin? Thanks.
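
    The recursion happens because the template loader resolves "admin/index.html" to the override itself. If the goal is only the header, one workaround (a sketch; the block names come from Django's stock admin/base_site.html) is to override admin/base_site.html instead, since that file extends the differently named admin/base.html and so never loads itself:

      {# templates/admin/base_site.html #}
      {% extends "admin/base.html" %}

      {% block title %}{{ title }} | My site admin{% endblock %}

      {% block branding %}
      <h1 id="site-name">My site administration</h1>
      {% endblock %}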

    Read the article

  • Does Resharper 4.1 support both Camel Humps and normal selection modes?

    - by Jonathan Parker
    I've found the setting for Camel Humps in resharper: Resharper - Options - Editor - Use CamelHumps The problem is that I would still like to be able to use the normal selection mode (i.e. the default behaviour for CTRL+Arrow and CTRL+SHIFT+Arrow) as well as the CamelHumps mode. For example consider this variable: private int MyVeryLongCamelCaseName; Now if I want to copy the entire variable then I want the VS default behaviour for CTRL+SHIFT+Left-Arrow which is to select the entire variable if the cursor is on the M. However if I want to change the name to say MyExtremelyLongCamelCaseName then I would like the CamelHumps behaviour provided by Resharper. Is there any way to have both behaviours with different shortcuts?

    Read the article

  • Drupal 6: display image with View in blog posts listing page...

    - by artmania
    Hi friends, I'm new to Drupal and love it so far :) I added Photo and Logo file fields to the blog entry type with CCK. I need to display these images on the blog post listing page, so in the Views module I added the fields below; At View: Content: Logo URL to file Content: Photo Path to file but it displays only the file names, and I need to display the images. How can I do this with the Views module? Blog Listing Page: Logo: http://blabla.com/drupal/sites/default/files/Logo_0.jpeg Photo: sites/default/files/photoname_0.jpeg Appreciate the help!!!! Thanks a lot!

    Read the article

  • Foreign keys in MySQL?

    - by icco
    I have been slowly learning SQL over the last few weeks. I've picked up all of the relational algebra and the basics of how relational databases work. What I'm trying to do now is learn how it's implemented. A stumbling block I've come across is foreign keys in MySQL. I can't seem to find much about them, other than that they exist in MySQL's InnoDB storage engine. What is a simple example of foreign keys implemented in MySQL? Here's part of a schema I wrote that doesn't seem to be working, if you would rather point out my flaw than show me a working example. CREATE TABLE `posts` ( `pID` bigint(20) NOT NULL auto_increment, `content` text NOT NULL, `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `uID` bigint(20) NOT NULL, `wikiptr` bigint(20) default NULL, `cID` bigint(20) NOT NULL, PRIMARY KEY (`pID`), Foreign Key(`cID`) references categories, Foreign Key(`uID`) references users ) ENGINE=InnoDB;
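
    A sketch of the corrected table (it assumes categories.cID and users.uID are the primary keys of those tables): InnoDB wants the referenced column named explicitly, matching column types on both sides, and an index on each referencing column (which it creates automatically if one is missing).

      CREATE TABLE `posts` (
        `pID`     BIGINT(20) NOT NULL AUTO_INCREMENT,
        `content` TEXT NOT NULL,
        `time`    TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        `uID`     BIGINT(20) NOT NULL,
        `wikiptr` BIGINT(20) DEFAULT NULL,
        `cID`     BIGINT(20) NOT NULL,
        PRIMARY KEY (`pID`),
        FOREIGN KEY (`cID`) REFERENCES `categories` (`cID`),
        FOREIGN KEY (`uID`) REFERENCES `users` (`uID`)
      ) ENGINE=InnoDB;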

    Read the article

  • sort the "rollup" in group by

    - by shantanuo
    I found that the "with rollup" option used with group by is very useful. But it does not behave with "order by" clause. Is there any way to order by the way I want as well as calculate the sub-totals? CREATE TABLE `mygroup` ( `id` int(11) default NULL, `country` varchar(100) default NULL ) ENGINE=MyISAM ; INSERT INTO `mygroup` VALUES (1,'India'),(5,'India'),(8,'India'),(18,'China'),(28,'China'),(28,'China'); mysql>select country, sum(id) from mygroup group by country with rollup; +---------+---------+ | country | sum(id) | +---------+---------+ | China | 74 | | India | 14 | | NULL | 88 | +---------+---------+ 3 rows in set (0.00 sec) mysql>select country, sum(id) as cnt from mygroup group by country order by cnt ; +---------+------+ | country | cnt | +---------+------+ | India | 14 | | China | 74 | +---------+------+ 2 rows in set (0.00 sec) mysql>select country, sum(id) as cnt from mygroup group by country with rollup order by cnt; ERROR 1221 (HY000): Incorrect usage of CUBE/ROLLUP and ORDER BY Expected Result: +---------+------+ | country | cnt | +---------+------+ | India | 14 | | China | 74 | | NULL | 88 | +---------+---------+ 3 rows in set (0.00 sec)

    Read the article

  • OneToOne JPA / Hibernate eager loading causes N+1 selects

    - by Alexandre Lavoie
    I created a method to have multilingual text on different objects without creating field for each languages or tables for each objects types. Now the only problem I've got is N+1 select queries when doing a simple loading. Tables schema : CREATE TABLE `testentities` ( `keyTestEntity` int(11) NOT NULL, `keyMultilingualText` int(11) NOT NULL, PRIMARY KEY (`keyTestEntity`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts` ( `keyMultilingualText` int(11) NOT NULL auto_increment, PRIMARY KEY (`keyMultilingualText`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts_values` ( `languageCode` varchar(5) NOT NULL, `keyMultilingualText` int(11) NOT NULL, `value` text, PRIMARY KEY (`languageCode`,`keyMultilingualText`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; MultilingualText.java @Entity @Table(name = "common_multilingualtexts") public class MultilingualText implements Serializable { private Integer m_iKeyMultilingualText; private Map<String, String> m_lValues = new HashMap<String, String>(); public void setKeyMultilingualText(Integer p_iKeyMultilingualText) { m_iKeyMultilingualText = p_iKeyMultilingualText; } @Id @GeneratedValue @Column(name = "keyMultilingualText") public Integer getKeyMultilingualText() { return m_iKeyMultilingualText; } public void setValues(Map<String, String> p_lValues) { m_lValues = p_lValues; } @ElementCollection(fetch = FetchType.EAGER) @CollectionTable(name = "common_multilingualtexts_values", joinColumns = @JoinColumn(name = "keyMultilingualText")) @MapKeyColumn(name = "languageCode") @Column(name = "value") public Map<String, String> getValues() { return m_lValues; } public void put(String p_sLanguageCode, String p_sValue) { m_lValues.put(p_sLanguageCode,p_sValue); } public String get(String p_sLanguageCode) { if(m_lValues.containsKey(p_sLanguageCode)) { return m_lValues.get(p_sLanguageCode); } return null; } } And it is used like this on a object (having a foreign key to the multilingual text) : @Entity @Table(name = "testentities") public class TestEntity implements Serializable { private Integer m_iKeyEntity; private MultilingualText m_oText; public void setKeyEntity(Integer p_iKeyEntity) { m_iKeyEntity = p_iKeyEntity; } @Id @GeneratedValue @Column(name = "keyEntity") public Integer getKeyEntity() { return m_iKeyEntity; } public void setText(MultilingualText p_oText) { m_oText = p_oText; } @OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "keyText") public MultilingualText getText() { return m_oText; } } Now, when doing a simple HQL query : from TestEntity, I get a query selecting TestEntity's and one query for each MultilingualText that need to be loaded on each TestEntity. I've searched a lot and found absolutely no solutions. I have tested : @Fetch(FetchType.JOIN) optional = false @ManyToOne instead of @OneToOne Now I am out of idea!
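
    For this particular query, one way to collapse the per-entity selects (a sketch using the Hibernate Session API; the session variable is assumed) is an HQL fetch join, which loads each MultilingualText in the same SQL statement; the eagerly mapped values collection can additionally be annotated with Hibernate's @Fetch(FetchMode.SUBSELECT) so its rows are pulled in one extra query rather than one per text.

      // Loads TestEntity and its MultilingualText together instead of one select per row.
      @SuppressWarnings("unchecked")
      List<TestEntity> entities = session
              .createQuery("select te from TestEntity te join fetch te.text")
              .list();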

    Read the article

  • asp.net mvc routing id parameter

    - by parminder
    Hi experts, I am working on a website in ASP.NET MVC. I have a route routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); which is the default route. Now I have the method public ActionResult ErrorPage(int errorno) { return View(); } If I request http://something/mycontroller/Errorpage/1 it doesn't work, but if I change the parameter name from errorno to id it works. Is it compulsory to use the same parameter name for this method, or do I need to create separate routes for such situations? Regards, Parminder
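
    The default route only maps the third URL segment to a parameter literally named id, so the binder has nothing called errorno to fill. Either rename the action parameter or register a dedicated route ahead of the default one (a sketch):

      // Option 1: match the route token
      public ActionResult ErrorPage(int id)
      {
          return View();
      }

      // Option 2: keep the errorno name and add this route before "Default" in Global.asax
      routes.MapRoute(
          "ErrorPage",
          "{controller}/ErrorPage/{errorno}",
          new { controller = "Home", action = "ErrorPage" }
      );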

    Read the article

  • ASP.Net Response Filter Clashing with SharePoint 2010 Publishing Site Defaults

    - by Jason Weber
    Hello everyone, I'm debugging an HttpModule with an ASP.NET response filter. This dynamically rewrites portions of rendered SharePoint WCM pages. The publishing pages render fine in SP2007 on both Server 2003 and Server 2008. However the equivalent pages fail to render in SP2010 B2 on Server 2008 R2 / IIS7. The following error is returned by ASP.NET: Post cache substitution is not compatible with modules in the IIS integrated pipeline that modify the response buffers. Either a native module in the pipeline has modified an HTTP_DATA_CHUNK structure associated with a managed post cache substitution callback, or a managed filter has modified the response. This error is consistent with KB #2014472. However: Caching is disabled for anonymous & authenticated access at the site collection level There do not appear to be any Substitution controls on either the master or layout page The IIS 7 settings are all stock default This is happening e.g. on /pages/default.aspx. It seems likely I'm missing something cache related...but what?

    Read the article

  • MVC DateTime binding with incorrect date format

    - by Sam Wessel
    Asp.net-MVC now allows for implicit binding of DateTime objects. I have an action along the lines of public ActionResult DoSomething(DateTime startDate) { ... } This successfully converts a string from an ajax call into a DateTime. However, we use the date format dd/MM/yyyy; MVC is converting to MM/dd/yyyy. For example, submitting a call to the action with a string '09/02/2009' results in a DateTime of '02/09/2009 00:00:00', or September 2nd in our local settings. I don't want to roll my own model binder for the sake of a date format. But it seems needless to have to change the action to accept a string and then use DateTime.Parse if MVC is capable of doing this for me. Is there any way to alter the date format used in the default model binder for DateTime? Shouldn't the default model binder use your localisation settings anyway?
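
    As far as the default binder goes, posted form values are parsed with the thread's current culture while query-string and route values use the invariant culture, so one low-effort option for form posts (a sketch; en-GB short dates are dd/MM/yyyy) is to pin the request culture in web.config rather than write a model binder:

      <system.web>
        <globalization culture="en-GB" uiCulture="en-GB" />
      </system.web>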

    Read the article

  • Session management with OpenID, in ASP.NET

    - by Andreas Grech
    I am currently playing with DotNetOpenAuth to make an ASP.NET (C#) website use OpenID instead of the normal login-password routine for user and session handling. So far, I have added DotNetOpenAuth.dll to my project and tried a test login page with the following: <rp:OpenIdLogin ID="OpenIdLogin1" runat="server" /> When I run the page, I enter a valid myOpenID URL and the website redirects to the myOpenID page, where I enter my password; upon success, it returns to my default.aspx, due to the following in my web.config: <authentication mode="Forms"> <forms defaultUrl="/Default.aspx" loginUrl="~/Login.aspx"/> </authentication> Now that the user is "logged in", how can I handle my session? At the moment, I don't know how to, for example, check whether the session is still alive or terminate it. My basic question is: how can I manage the session once the user is authenticated with OpenID?
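
    Once the relying-party control reports a successful authentication, session handling is ordinary ASP.NET forms authentication; the claimed identifier can serve as the user name (a sketch; claimedIdentifier stands in for whatever value the control hands back):

      // After a successful OpenID response:
      FormsAuthentication.SetAuthCookie(claimedIdentifier, false);  // starts the session

      // Anywhere else in the application:
      if (User.Identity.IsAuthenticated)        // is the session still alive?
      {
          string openId = User.Identity.Name;   // the identifier stored in the ticket
      }

      FormsAuthentication.SignOut();            // terminates the session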

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • When dragging an object from one div to another, the format gets changed

    - by Anuj
    When an object is dragged and dropped from one div to another, the content of the li gets changed to text only. I want it to keep the same format, i.e. an 'li', after dropping it. $(function() { $( "#catalog ul" ).sortable({ zIndex: 10000, revert: true }); $( "#catalog" ).accordion(); $( "#catalog ul" ).draggable({ appendTo: "body", helper: "clone", zIndex: 10000 }); $( "#dialogIteration ol" ).droppable({ activeClass: "ui-state-default", hoverClass: "ui-state-hover", drop: function( event, ui ) { $( this ).find( ".placeholder" ).remove(); $( "<li></li>" ).text( ui.draggable.text() ).appendTo( this ); } }).sortable({ items: "li:not(.placeholder)", sort: function() { // gets added unintentionally by droppable interacting with sortable // using connectWithSortable fixes this, but doesn't allow you to customize active/hoverClass options $( this ).removeClass( "ui-state-default" ); } }); $( "ul, li" ).disableSelection(); $("#dialogIteration").dialog(); }); Demo: http://jsfiddle.net/coolanuj/7683X/7/
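
    The markup is lost because the drop handler rebuilds each item from its text alone. One fix (a sketch; it assumes the dragged elements are the li items you want to keep) is to clone the dragged node instead:

      drop: function (event, ui) {
          $(this).find(".placeholder").remove();
          // Clone the original node so its markup and classes survive the drop.
          ui.draggable.clone().appendTo(this);
      }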

    Read the article

  • CMS for news editing

    - by Genrih
    I need to convert a site to a CMS. One of the main features is editing articles of a certain type, and this will be done by non-programmers, so the simpler the better. Ideally there would be no editor like CKEditor, just a simple wizard where it is possible to insert images, text, and a title, with everything displayed in the page's default template. I can implement the wizard myself, as long as replacing the default editor is not too difficult. Multilanguage support is also required. What can you advise?

    Read the article

  • My ASP.NET Web Application cannot 'find' any of my classes in the App_Code folder. Why?

    - by Pure.Krome
    Hi folks, I'm trying to make a new ASP.NET web application, so I'm copying my files from one site to the new one, in the same solution. Now none of my classes in the App_Code directory are getting 'picked up' by the rest of my project. For example... \_ \_App_Code |_ BaseMasterPage.cs (please don't ask why this is in here..) |_ Utility.cs |_ FooBar.cs \_MasterPages |_ Default.master.cs // This file errors ;( namespace Foo.WebSite.MasterPages { public partial class Default_master : App_Code.BaseMasterPage { ... } } namespace Foo.WebSite.App_Code { public class BaseMasterPage : MasterPage { .. } } It cannot find App_Code.BaseMasterPage (compilation and IntelliSense error) in the Default.master.cs page. Can someone please help? This is killing me :(

    Read the article

  • AccessControlException: access denied - caller function failed to load properties file

    - by Michael Mao
    Hi all: I am having a jar archive environment which is gonna call my class in a folder like this: java -jar "emarket.jar" ../tournament 100 My compiled class is deployed into the ../tournament folder, this command runs well. After I changed my code to load a properties file, it gets the following exception message: Exception in thread "main" java.security.AccessControlException: access denied (java.io.FilePermission agent.properties read) at java.security.AccessControlContext.checkPermission(Unknown Source) at java.security.AccessController.checkPermission(Unknown Source) at java.lang.SecurityManager.checkPermission(Unknown Source) at java.lang.SecurityManager.checkRead(Unknown Source) at java.io.FileInputStream.<init>(Unknown Source) at java.io.FileInputStream.<init>(Unknown Source) at Agent10479475.getPropertiesFromConfigFile(Agent10479475.java:110) at Agent10479475.<init>(Agent10479475.java:100) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) at java.lang.Class.newInstance0(Unknown Source) at java.lang.Class.newInstance(Unknown Source) at emarket.client.EmarketSandbox.instantiateClientObjects(EmarketSandbox.java:92) at emarket.client.EmarketSandbox.<init>(EmarketSandbox.java:27) at emarket.client.EmarketSandbox.main(EmarketSandbox.java:166) I am wondering why this security checking will fail. I issue the getPropertitiesFromConfigFile() function inside my class's default constructor, like this: public class Agent10479475 extends AbstractClientAgent { //default constructor public Agent10479475() { //set all properties to their default values in constructor FT_THRESHOLD = 400; FT_THRESHOLD_MARGIN = 50; printOut("Now loading properties from a config file...", ""); getPropertiesFromConfigFile(); printOut("Finished loading",""); } private void getPropertiesFromConfigFile() { Properties props = new Properties(); try { props.load(new FileInputStream("agent.properties")); FT_THRESHOLD = Long.parseLong(props.getProperty("FT_THRESHOLD")); FT_THRESHOLD_MARGIN = Long.parseLong(props.getProperty("FT_THRESHOLD_MARGIN ")); } catch(java.io.FileNotFoundException fnfex) { printOut("CANNOT FIND PROPERTIES FILE :", fnfex); } catch(java.io.IOException ioex) { printOut("IOEXCEPTION OCCURED :", ioex); } } } My class is loading its own .properties file under the same folder. why would the Java environment complains about such a denial of access? Must I config the emarket.client.EmarketSandbox class, which is not written by me and stored inside the emarket.jar, to access my agent.properties file? Any hints or suggestions is much appreciated. Many thanks in advance.
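
    The sandbox runs agents under a SecurityManager, so a direct FileInputStream needs a java.io.FilePermission grant in the policy it was started with. A workaround that often avoids touching the policy (a sketch; it assumes agent.properties is deployed alongside the compiled class) is to read the file through the class loader, which can normally read from its own code source:

      private void getPropertiesFromConfigFile()
      {
          Properties props = new Properties();
          // Looks for agent.properties next to the class, via the loader that loaded it.
          java.io.InputStream in = Agent10479475.class.getResourceAsStream("agent.properties");
          if (in == null)
          {
              printOut("CANNOT FIND PROPERTIES FILE :", "agent.properties");
              return;
          }
          try
          {
              props.load(in);
              FT_THRESHOLD = Long.parseLong(props.getProperty("FT_THRESHOLD"));
              FT_THRESHOLD_MARGIN = Long.parseLong(props.getProperty("FT_THRESHOLD_MARGIN"));
          }
          catch (java.io.IOException ioex)
          {
              printOut("IOEXCEPTION OCCURED :", ioex);
          }
      }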

    Read the article

  • How to open window pop-up from Silverlight Out-of-Browser?

    - by Jeaffrey Gilbert
    I need to open a window pop-up from a Silverlight Out-of-Browser application. I've added the parameter <param name="enablehtmlaccess" value="true" /> in Index.html, but executing this from code-behind: HtmlPage.Window.Navigate(new Uri(myUrl), "_blank", myFeatures); still returns the error: Silverlight OOB Error: The DOM/scripting bridge is disabled. I've read about this; does it mean that I can't open a pop-up from OOB? The reason I need this: I show an HTML page in the OOB Silverlight app with a WebBrowser control inside a ChildWindow, but when I click an anchor in the HTML page that links to a _blank target, it jumps to my default browser. That doesn't meet the requirement, unless I also launch that HTML index page in the default browser the first time, triggered from a button control in the OOB Silverlight app. Is that possible? Please advise, thanks.

    Read the article

  • What is the correct usage of blueprint-typography-body([$font-size])?

    - by Alexis Abril
    Recent convert to RoR and I've been using Compass w/ Blueprint to dip into the proverbial pool. Compass has been fantastic, but I've come across something strange within the Typography library. The blueprint-typography-body mixin contains the following: =blueprint-typography-body($font-size: $blueprint-font-size) line-height: 1.5 +normal-text font-size: 100% * $font-size / 16px My question revolves around "font-size." I'm a bit lost, as I would expect to pass in a font size and have that size reflected upon page load. However, in this scenario the formula seems to dictate a percentage against the default font. ie: +blueprint-typography-body(10px) //produces 7.5px off of the default font size of 12px from what I can tell. In essence, I'm curious if there is a standard to setting font size within Compass other than explicitly declaring "font-size: 10px". Note: The reason I'm leaning towards Blueprint/Compass font stylings is due to the standardization of line-heights, fonts and colors.
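
    The division in the mixin is against the 16px browser default, not Blueprint's 12px base: the argument is the pixel size you want body text to render at, and the mixin emits the equivalent percentage (10px gives 62.5%). If an ancestor has already changed the base size, the rendered pixels shift with it, which would explain 10px coming out as 7.5px (62.5% of 12px). A usage sketch in the same indented Sass syntax:

      body
        +blueprint-typography-body(14px)
        // emits: font-size: 87.5%  (14px / 16px), i.e. 14px against the browser default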

    Read the article

  • Change TitleWindow close button

    - by Cameigons
    I'm working with the Flex 3.4 SDK. I need to change the default close button image of a TitleWindow, so I'm defining a CSS selector like this: TitleWindow{ close-button-skin: Embed('assets/close.png'); border-color: #FFFFFF; corner-radius: 10; closeButtonDisabledSkin: ClassReference(null); closeButtonDownSkin: ClassReference(null); closeButtonOverSkin: ClassReference(null); closeButtonUpSkin: ClassReference(null); } The problem is that the resulting image is totally squeezed beyond recognition, probably because the image dimensions are 55x10 pixels (much wider than the default close button's square-like dimensions) and Flex forces it to fit that size. Would anyone know how to fix that?

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event.  If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time.  As always, I sighed, had a soothing cup of tea, and typed it all in again.  The new code I hastily tapped in  was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain  the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better.  Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch.  If you’ve never accidentally lost  your code, then it is worth doing it deliberately once for the experience. Creative people have, until Technology mistakenly prevented it, torn up their drafts or sketches, threw them in the bin, and started again from scratch.  Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual:  Most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000 word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version.  Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page.  It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code.  For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive.  There were some fragments left on individual machines, but they were all of different versions.  The developers were in despair.  Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend.  Sure, that elegant universally-applicable input-form routine was‘nt quite so elegant, but it didn’t really need to be as we knew what forms it needed to support.  Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can’t see many managers buying into the idea. In reading the full story of the near-loss of Toy Story 2, it set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway.  Was this an early  case of the ‘source-control wet-job’?’ It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article
