sparsefilt
Feature extraction by using sparse filtering
Description
Mdl = sparsefilt(X,q) returns a sparse filtering model object that contains the results from applying sparse filtering to the table or matrix of predictor data X containing p variables. q is the number of features to extract from X, so sparsefilt learns a p-by-q matrix of transformation weights. For undercomplete or overcomplete feature representations, q can be less than or greater than the number of predictor variables, respectively.
- To access the learned transformation weights, use Mdl.TransformWeights.
- To transform X to the new set of features by using the learned transformation, pass Mdl and X to transform.
Mdl = sparsefilt(X,q,Name,Value) uses additional options specified by one or more Name,Value pair arguments. For example, you can standardize the predictor data or apply L2 regularization.
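As a quick sketch, the two syntaxes can be combined as follows. The data and parameter values here are made up for illustration, and the example assumes the Statistics and Machine Learning Toolbox is installed.

```matlab
% Sketch: learn q = 10 sparse features from random predictor data.
rng default                     % for reproducibility
X = randn(200,25);              % n = 200 observations, p = 25 predictors
q = 10;                         % number of features to extract
Mdl = sparsefilt(X,q,'Standardize',true,'Lambda',0.1);
W = Mdl.TransformWeights;       % p-by-q weight matrix, here 25-by-10
Y = transform(Mdl,X);           % 200-by-10 matrix of new features
```

The weight matrix always has one column per requested feature, regardless of whether q is smaller (undercomplete) or larger (overcomplete) than p.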
Examples
Create Sparse Filter
Create a SparseFiltering object by using the sparsefilt function.
Load the SampleImagePatches image patches.
data = load('SampleImagePatches'); size(data.X)
ans = 1×2

        5000         363
There are 5,000 image patches, each containing 363 features.
Extract 100 features from the data.
rng default % For reproducibility
Q = 100;
obj = sparsefilt(data.X,Q,'IterationLimit',100)
Warning: Solver LBFGS was not able to converge to a solution.
obj = 
  SparseFiltering
            ModelParameters: [1x1 struct]
              NumPredictors: 363
         NumLearnedFeatures: 100
                         Mu: []
                      Sigma: []
                    FitInfo: [1x1 struct]
           TransformWeights: [363x100 double]
    InitialTransformWeights: []

  Properties, Methods
sparsefilt issues a warning because it stopped due to reaching the iteration limit, instead of reaching a step-size limit or a gradient-size limit. You can still use the learned features in the returned object by calling the transform function.
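For instance, continuing from the fitted model obj above, the learned (if imperfect) features can still be extracted. This is a sketch; the histogram call is only a quick way to see that most feature activations are near zero.

```matlab
% Despite the convergence warning, the model is usable:
Y = transform(obj,data.X);      % 5000-by-100 matrix of learned features
histogram(Y(:))                 % inspect the sparsity of the activations
```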
Restart sparsefilt
Continue optimizing a sparse filter.
Load the SampleImagePatches image patches.
data = load('SampleImagePatches'); size(data.X)
ans = 1×2

        5000         363
There are 5,000 image patches, each containing 363 features.
Extract 100 features from the data and use an iteration limit of 20.
rng default % For reproducibility
q = 100;
Mdl = sparsefilt(data.X,q,'IterationLimit',20);
Warning: Solver LBFGS was not able to converge to a solution.
View the resulting transformation matrix as image patches.
wts = Mdl.TransformWeights;
W = reshape(wts,[11,11,3,q]);
[dx,dy,~,~] = size(W);
for f = 1:q
    Wvec = W(:,:,:,f);
    Wvec = Wvec(:);
    Wvec = (Wvec - min(Wvec))/(max(Wvec) - min(Wvec));
    W(:,:,:,f) = reshape(Wvec,dx,dy,3);
end
m = ceil(sqrt(q));
n = m;
img = zeros(m*dx,n*dy,3);
f = 1;
for i = 1:m
    for j = 1:n
        if (f <= q)
            img((i-1)*dx+1:i*dx,(j-1)*dy+1:j*dy,:) = W(:,:,:,f);
            f = f+1;
        end
    end
end
imshow(img,'InitialMagnification',300);
The image patches appear noisy. To clean up the noise, try more iterations. Restart the optimization from where it stopped for another 40 iterations.
Mdl = sparsefilt(data.X,q,'IterationLimit',40,'InitialTransformWeights',wts);
Warning: Solver LBFGS was not able to converge to a solution.
View the updated transformation matrix as image patches.
wts = Mdl.TransformWeights;
W = reshape(wts,[11,11,3,q]);
[dx,dy,~,~] = size(W);
for f = 1:q
    Wvec = W(:,:,:,f);
    Wvec = Wvec(:);
    Wvec = (Wvec - min(Wvec))/(max(Wvec) - min(Wvec));
    W(:,:,:,f) = reshape(Wvec,dx,dy,3);
end
m = ceil(sqrt(q));
n = m;
img = zeros(m*dx,n*dy,3);
f = 1;
for i = 1:m
    for j = 1:n
        if (f <= q)
            img((i-1)*dx+1:i*dx,(j-1)*dy+1:j*dy,:) = W(:,:,:,f);
            f = f+1;
        end
    end
end
imshow(img,'InitialMagnification',300);
These images are less noisy.
Input Arguments
X — Predictor data
numeric matrix | table
Predictor data, specified as an n-by-p numeric matrix or table. Rows correspond to individual observations, and columns correspond to individual predictor variables. If X is a table, then all of its variables must be numeric vectors.
Data Types: single | double | table
q — Number of features to extract
positive integer
Number of features to extract from the predictor data, specified as a positive integer.
sparsefilt stores a p-by-q transform weight matrix in Mdl.TransformWeights. Therefore, setting a very large value for q can result in greater memory consumption and increased computation time.
Data Types: single | double
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'Standardize',true,'Lambda',1 standardizes the predictor data and applies a penalty of 1 to the transform weight matrix.
IterationLimit — Maximum number of iterations
1000 (default) | positive integer
Maximum number of iterations, specified as the comma-separated pair consisting of 'IterationLimit' and a positive integer.
Example: 'IterationLimit',1e6
Data Types: single | double
VerbosityLevel — Verbosity level
0 (default) | nonnegative integer
Verbosity level for monitoring algorithm convergence, specified as the comma-separated pair consisting of 'VerbosityLevel' and a value in this table.
| Value | Description |
|---|---|
| 0 | sparsefilt does not display convergence information at the command line. |
| Positive integer | sparsefilt displays convergence information at the command line. |
Convergence Information

| Heading | Meaning |
|---|---|
| FUN VALUE | Objective function value. |
| NORM GRAD | Norm of the gradient of the objective function. |
| NORM STEP | Norm of the iterative step, meaning the distance between the previous point and the current point. |
| CURV | OK means the weak Wolfe condition is satisfied. This condition is a combination of sufficient decrease of the objective function and a curvature condition. |
| GAMMA | Inner product of the step times the gradient difference, divided by the inner product of the gradient difference with itself. The gradient difference is the gradient at the current point minus the gradient at the previous point. Gives diagnostic information on the objective function curvature. |
| ALPHA | Step direction multiplier, which differs from 1 when the algorithm performed a line search. |
| ACCEPT | YES means the algorithm found an acceptable step to take. |
Example: 'VerbosityLevel',1
Data Types: single | double
Lambda — L2 regularization coefficient value
0 (default) | positive numeric scalar
L2 regularization coefficient value for the transform weight matrix, specified as the comma-separated pair consisting of 'Lambda' and a positive numeric scalar. If you specify 0, the default, then there is no regularization term in the objective function.
Example: 'Lambda',0.1
Data Types: single | double
Standardize — Flag to standardize predictor data
false (default) | true
Flag to standardize the predictor data, specified as the comma-separated pair consisting of 'Standardize' and true (1) or false (0).
If Standardize is true, then sparsefilt centers and scales each column of the predictor data by the column mean and standard deviation, respectively, and stores the column means and standard deviations in the Mu and Sigma properties of Mdl.
Example: 'Standardize',true
Data Types: logical
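If standardization behaves as described, the stored Mu and Sigma values let you reproduce the standardized input manually. This sketch uses made-up data and assumes Mu and Sigma are stored as 1-by-p row vectors, as suggested by the model display.

```matlab
rng default
X = randn(100,5) .* [1 10 100 0.1 1];   % columns on very different scales
Mdl = sparsefilt(X,3,'Standardize',true,'IterationLimit',50);
Xstd = (X - Mdl.Mu) ./ Mdl.Sigma;       % standardized copy of X
% Each standardized column should have mean ~0 and standard deviation ~1:
mean(Xstd), std(Xstd)
```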
InitialTransformWeights — Transformation weights that initialize optimization
randn(p,q) (default) | numeric matrix
Transformation weights that initialize optimization, specified as the comma-separated pair consisting of 'InitialTransformWeights' and a p-by-q numeric matrix. p must be the number of columns or variables in X, and q is the value of q.
Tip
You can continue optimizing a previously returned transform weight matrix by passing it as an initial value in another call to sparsefilt. The output model object Mdl stores a learned transform weight matrix in the TransformWeights property.
Example: 'InitialTransformWeights',Mdl.TransformWeights
Data Types: single | double
GradientTolerance — Relative convergence tolerance on gradient norm
1e-6 (default) | positive numeric scalar
Relative convergence tolerance on the gradient norm, specified as the comma-separated pair consisting of 'GradientTolerance' and a positive numeric scalar. This gradient is the gradient of the objective function.
Example: 'GradientTolerance',1e-4
Data Types: single | double
StepTolerance — Absolute convergence tolerance on step size
1e-6 (default) | positive numeric scalar
Absolute convergence tolerance on the step size, specified as the comma-separated pair consisting of 'StepTolerance' and a positive numeric scalar.
Example: 'StepTolerance',1e-4
Data Types: single | double
Output Arguments
Mdl — Learned sparse filtering model
SparseFiltering model object
Learned sparse filtering model, returned as a SparseFiltering model object.
To access properties of Mdl, use dot notation. For example:
- To access the learned transform weights, use Mdl.TransformWeights.
- To access the fitting information structure, use Mdl.FitInfo.
To find sparse filtering coefficients for new data, use the transform function.
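A minimal sketch of scoring new observations with transform, using made-up data:

```matlab
rng default
Xtrain = randn(500,20);                        % training data, p = 20
Mdl = sparsefilt(Xtrain,8,'IterationLimit',50);
Xnew = randn(10,20);                           % new data, same 20 predictors
Ynew = transform(Mdl,Xnew);                    % 10-by-8 matrix of feature values
```

The new data must have the same number of predictor variables (Mdl.NumPredictors) as the data used for fitting.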
Algorithms
The sparsefilt function creates a nonlinear transformation of input features to output features. The transformation is based on optimizing an objective function that encourages the representation of each example by as few output features as possible, while at the same time keeping the output features equally active across examples.
For details, see Sparse Filtering Algorithm.
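The objective described above can be sketched as follows. This follows the published sparse filtering formulation (soft absolute value, feature-wise then example-wise L2 normalization, L1 penalty) rather than sparsefilt's exact internals; the function name and smoothing constant are illustrative.

```matlab
function obj = sparseFilteringObjective(W,X)
% Sketch of the sparse filtering objective for weights W (p-by-q) and
% data X (n-by-p). Not sparsefilt's actual implementation.
    F = X*W;                       % linear feature values, n-by-q
    F = sqrt(F.^2 + 1e-8);         % smooth absolute value
    F = F ./ sqrt(sum(F.^2,1));    % normalize each feature across examples
    F = F ./ sqrt(sum(F.^2,2));    % normalize each example across features
    obj = sum(F(:));               % L1 penalty: small when rows are sparse
end
```

Because each row of the normalized feature matrix has unit L2 norm, minimizing its L1 norm pushes each example to be represented by few, strongly active features, while the column normalization keeps the features equally active across examples.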